00:00:00.000 Started by upstream project "autotest-per-patch" build number 132366
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.115 The recommended git tool is: git
00:00:00.115 using credential 00000000-0000-0000-0000-000000000002
00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.170 Fetching changes from the remote Git repository
00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.230 > git --version # timeout=10
00:00:00.270 > git --version # 'git version 2.39.2'
00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.312 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.312 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.305 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.318 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.332 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.332 > git config core.sparsecheckout # timeout=10
00:00:07.345 > git read-tree -mu HEAD # timeout=10
00:00:07.362 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.384 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.384 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.471 [Pipeline] Start of Pipeline
00:00:07.485 [Pipeline] library
00:00:07.487 Loading library shm_lib@master
00:00:07.487 Library shm_lib@master is cached. Copying from home.
00:00:07.503 [Pipeline] node
00:00:07.515 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.516 [Pipeline] {
00:00:07.524 [Pipeline] catchError
00:00:07.525 [Pipeline] {
00:00:07.535 [Pipeline] wrap
00:00:07.544 [Pipeline] {
00:00:07.554 [Pipeline] stage
00:00:07.555 [Pipeline] { (Prologue)
00:00:07.818 [Pipeline] sh
00:00:08.103 + logger -p user.info -t JENKINS-CI
00:00:08.126 [Pipeline] echo
00:00:08.128 Node: WFP6
00:00:08.138 [Pipeline] sh
00:00:08.441 [Pipeline] setCustomBuildProperty
00:00:08.456 [Pipeline] echo
00:00:08.458 Cleanup processes
00:00:08.464 [Pipeline] sh
00:00:08.749 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.749 2376500 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.764 [Pipeline] sh
00:00:09.052 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.052 ++ grep -v 'sudo pgrep'
00:00:09.052 ++ awk '{print $1}'
00:00:09.052 + sudo kill -9
00:00:09.052 + true
00:00:09.066 [Pipeline] cleanWs
00:00:09.076 [WS-CLEANUP] Deleting project workspace...
00:00:09.076 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.082 [WS-CLEANUP] done
00:00:09.087 [Pipeline] setCustomBuildProperty
00:00:09.100 [Pipeline] sh
00:00:09.381 + sudo git config --global --replace-all safe.directory '*'
00:00:09.482 [Pipeline] httpRequest
00:00:10.201 [Pipeline] echo
00:00:10.203 Sorcerer 10.211.164.20 is alive
00:00:10.214 [Pipeline] retry
00:00:10.217 [Pipeline] {
00:00:10.233 [Pipeline] httpRequest
00:00:10.237 HttpMethod: GET
00:00:10.238 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.238 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.263 Response Code: HTTP/1.1 200 OK
00:00:10.263 Success: Status code 200 is in the accepted range: 200,404
00:00:10.264 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:36.359 [Pipeline] }
00:00:36.382 [Pipeline] // retry
00:00:36.391 [Pipeline] sh
00:00:36.679 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:36.695 [Pipeline] httpRequest
00:00:37.101 [Pipeline] echo
00:00:37.103 Sorcerer 10.211.164.20 is alive
00:00:37.114 [Pipeline] retry
00:00:37.117 [Pipeline] {
00:00:37.133 [Pipeline] httpRequest
00:00:37.137 HttpMethod: GET
00:00:37.138 URL: http://10.211.164.20/packages/spdk_c02c5e04b33c5c72693b843c1a43be5e2c38465d.tar.gz
00:00:37.138 Sending request to url: http://10.211.164.20/packages/spdk_c02c5e04b33c5c72693b843c1a43be5e2c38465d.tar.gz
00:00:37.154 Response Code: HTTP/1.1 200 OK
00:00:37.154 Success: Status code 200 is in the accepted range: 200,404
00:00:37.155 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c02c5e04b33c5c72693b843c1a43be5e2c38465d.tar.gz
00:01:05.225 [Pipeline] }
00:01:05.242 [Pipeline] // retry
00:01:05.249 [Pipeline] sh
00:01:05.537 + tar --no-same-owner -xf spdk_c02c5e04b33c5c72693b843c1a43be5e2c38465d.tar.gz
00:01:08.092 [Pipeline] sh
00:01:08.375 + git -C spdk log --oneline -n5
00:01:08.375 c02c5e04b scripts/bash-completion: Speed up rpc lookup
00:01:08.375 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:01:08.375 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:01:08.375 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:01:08.375 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:01:08.386 [Pipeline] }
00:01:08.400 [Pipeline] // stage
00:01:08.408 [Pipeline] stage
00:01:08.411 [Pipeline] { (Prepare)
00:01:08.428 [Pipeline] writeFile
00:01:08.443 [Pipeline] sh
00:01:08.796 + logger -p user.info -t JENKINS-CI
00:01:08.809 [Pipeline] sh
00:01:09.092 + logger -p user.info -t JENKINS-CI
00:01:09.105 [Pipeline] sh
00:01:09.388 + cat autorun-spdk.conf
00:01:09.388 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.388 SPDK_TEST_NVMF=1
00:01:09.388 SPDK_TEST_NVME_CLI=1
00:01:09.388 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.388 SPDK_TEST_NVMF_NICS=e810
00:01:09.388 SPDK_TEST_VFIOUSER=1
00:01:09.388 SPDK_RUN_UBSAN=1
00:01:09.389 NET_TYPE=phy
00:01:09.396 RUN_NIGHTLY=0
00:01:09.401 [Pipeline] readFile
00:01:09.429 [Pipeline] withEnv
00:01:09.433 [Pipeline] {
00:01:09.445 [Pipeline] sh
00:01:09.729 + set -ex
00:01:09.730 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:09.730 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.730 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.730 ++ SPDK_TEST_NVMF=1
00:01:09.730 ++ SPDK_TEST_NVME_CLI=1
00:01:09.730 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.730 ++ SPDK_TEST_NVMF_NICS=e810
00:01:09.730 ++ SPDK_TEST_VFIOUSER=1
00:01:09.730 ++ SPDK_RUN_UBSAN=1
00:01:09.730 ++ NET_TYPE=phy
00:01:09.730 ++ RUN_NIGHTLY=0
00:01:09.730 + case $SPDK_TEST_NVMF_NICS in
00:01:09.730 + DRIVERS=ice
00:01:09.730 + [[ tcp == \r\d\m\a ]]
00:01:09.730 + [[ -n ice ]]
00:01:09.730 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:16.303 rmmod: ERROR: Module irdma is not currently loaded
00:01:16.303 rmmod: ERROR: Module i40iw is not currently loaded
00:01:16.303 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:16.303 + true
00:01:16.303 + for D in $DRIVERS
00:01:16.303 + sudo modprobe ice
00:01:16.303 + exit 0
00:01:16.313 [Pipeline] }
00:01:16.327 [Pipeline] // withEnv
00:01:16.332 [Pipeline] }
00:01:16.349 [Pipeline] // stage
00:01:16.359 [Pipeline] catchError
00:01:16.361 [Pipeline] {
00:01:16.376 [Pipeline] timeout
00:01:16.376 Timeout set to expire in 1 hr 0 min
00:01:16.378 [Pipeline] {
00:01:16.393 [Pipeline] stage
00:01:16.395 [Pipeline] { (Tests)
00:01:16.411 [Pipeline] sh
00:01:16.699 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.699 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.699 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.699 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:16.699 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.699 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:16.699 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.699 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:16.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:16.699 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:16.699 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:16.699 + source /etc/os-release
00:01:16.699 ++ NAME='Fedora Linux'
00:01:16.699 ++ VERSION='39 (Cloud Edition)'
00:01:16.699 ++ ID=fedora
00:01:16.699 ++ VERSION_ID=39
00:01:16.699 ++ VERSION_CODENAME=
00:01:16.699 ++ PLATFORM_ID=platform:f39
00:01:16.699 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:16.699 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:16.699 ++ LOGO=fedora-logo-icon
00:01:16.699 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:16.699 ++ HOME_URL=https://fedoraproject.org/
00:01:16.699 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:16.699 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:16.699 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:16.699 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:16.700 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:16.700 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:16.700 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:16.700 ++ SUPPORT_END=2024-11-12
00:01:16.700 ++ VARIANT='Cloud Edition'
00:01:16.700 ++ VARIANT_ID=cloud
00:01:16.700 + uname -a
00:01:16.700 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:16.700 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:19.242 Hugepages
00:01:19.242 node hugesize free / total
00:01:19.242 node0 1048576kB 0 / 0
00:01:19.242 node0 2048kB 0 / 0
00:01:19.242 node1 1048576kB 0 / 0
00:01:19.242 node1 2048kB 0 / 0
00:01:19.242
00:01:19.242 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:19.242 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:19.242 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:19.243 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:19.243 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:19.243 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:19.243 + rm -f /tmp/spdk-ld-path
00:01:19.243 + source autorun-spdk.conf
00:01:19.243 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.243 ++ SPDK_TEST_NVMF=1
00:01:19.243 ++ SPDK_TEST_NVME_CLI=1
00:01:19.243 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.243 ++ SPDK_TEST_NVMF_NICS=e810
00:01:19.243 ++ SPDK_TEST_VFIOUSER=1
00:01:19.243 ++ SPDK_RUN_UBSAN=1
00:01:19.243 ++ NET_TYPE=phy
00:01:19.243 ++ RUN_NIGHTLY=0
00:01:19.243 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:19.243 + [[ -n '' ]]
00:01:19.243 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.243 + for M in /var/spdk/build-*-manifest.txt
00:01:19.243 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:19.243 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.243 + for M in /var/spdk/build-*-manifest.txt
00:01:19.243 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:19.243 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.243 + for M in /var/spdk/build-*-manifest.txt
00:01:19.243 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:19.243 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.243 ++ uname
00:01:19.243 + [[ Linux == \L\i\n\u\x ]]
00:01:19.243 + sudo dmesg -T
00:01:19.503 + sudo dmesg --clear
00:01:19.503 + dmesg_pid=2377950
00:01:19.503 + [[ Fedora Linux == FreeBSD ]]
00:01:19.503 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.503 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.503 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:19.503 + [[ -x /usr/src/fio-static/fio ]]
00:01:19.503 + sudo dmesg -Tw
00:01:19.503 + export FIO_BIN=/usr/src/fio-static/fio
00:01:19.503 + FIO_BIN=/usr/src/fio-static/fio
00:01:19.503 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:19.503 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:19.503 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:19.503 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.503 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.503 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:19.503 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.503 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.503 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.503 09:39:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:19.503 09:39:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:19.503 09:39:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:19.503 09:39:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:19.503 09:39:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.503 09:39:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:19.503 09:39:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:19.503 09:39:53 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:19.503 09:39:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:19.503 09:39:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:19.503 09:39:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:19.503 09:39:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.503 09:39:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.503 09:39:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.503 09:39:53 -- paths/export.sh@5 -- $ export PATH
00:01:19.503 09:39:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.503 09:39:53 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:19.503 09:39:53 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:19.503 09:39:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091993.XXXXXX
00:01:19.503 09:39:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091993.Fv5sGG
00:01:19.503 09:39:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:19.503 09:39:53 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:19.503 09:39:53 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:19.503 09:39:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:19.503 09:39:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:19.503 09:39:53 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:19.503 09:39:53 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:19.503 09:39:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.503 09:39:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:19.503 09:39:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:19.503 09:39:53 -- pm/common@17 -- $ local monitor
00:01:19.503 09:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.503 09:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.503 09:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.503 09:39:53 -- pm/common@21 -- $ date +%s
00:01:19.503 09:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:19.503 09:39:53 -- pm/common@21 -- $ date +%s
00:01:19.503 09:39:53 -- pm/common@25 -- $ sleep 1
00:01:19.503 09:39:53 -- pm/common@21 -- $ date +%s
00:01:19.503 09:39:53 -- pm/common@21 -- $ date +%s
00:01:19.503 09:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091993
00:01:19.503 09:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091993
00:01:19.503 09:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091993
00:01:19.503 09:39:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091993
00:01:19.764 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091993_collect-vmstat.pm.log
00:01:19.764 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091993_collect-cpu-load.pm.log
00:01:19.764 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091993_collect-cpu-temp.pm.log
00:01:19.764 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091993_collect-bmc-pm.bmc.pm.log
00:01:20.703 09:39:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:20.703 09:39:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:20.703 09:39:54 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:20.703 09:39:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.703 09:39:54 -- spdk/autobuild.sh@16 -- $ date -u
00:01:20.703 Wed Nov 20 08:39:54 AM UTC 2024
00:01:20.703 09:39:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:20.703 v25.01-pre-200-gc02c5e04b
00:01:20.703 09:39:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:20.703 09:39:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:20.703 09:39:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:20.703 09:39:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:20.703 09:39:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:20.703 09:39:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.703 ************************************
00:01:20.703 START TEST ubsan
00:01:20.703 ************************************
00:01:20.703 09:39:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:20.703 using ubsan
00:01:20.703
00:01:20.703 real 0m0.000s
00:01:20.703 user 0m0.000s
00:01:20.703 sys 0m0.000s
00:01:20.703 09:39:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:20.703 09:39:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:20.703 ************************************
00:01:20.703 END TEST ubsan
00:01:20.703 ************************************
00:01:20.703 09:39:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:20.703 09:39:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:20.703 09:39:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:20.703 09:39:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:20.963 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:20.963 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:21.222 Using 'verbs' RDMA provider
00:01:34.014 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:46.233 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:46.233 Creating mk/config.mk...done.
00:01:46.233 Creating mk/cc.flags.mk...done.
00:01:46.233 Type 'make' to build.
00:01:46.233 09:40:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:46.233 09:40:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:46.233 09:40:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:46.233 09:40:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.233 ************************************
00:01:46.233 START TEST make
00:01:46.233 ************************************
00:01:46.233 09:40:19 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:46.801 make[1]: Nothing to be done for 'all'.
00:01:48.196 The Meson build system
00:01:48.196 Version: 1.5.0
00:01:48.196 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:48.196 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:48.196 Build type: native build
00:01:48.196 Project name: libvfio-user
00:01:48.196 Project version: 0.0.1
00:01:48.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:48.196 C linker for the host machine: cc ld.bfd 2.40-14
00:01:48.196 Host machine cpu family: x86_64
00:01:48.196 Host machine cpu: x86_64
00:01:48.196 Run-time dependency threads found: YES
00:01:48.196 Library dl found: YES
00:01:48.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:48.196 Run-time dependency json-c found: YES 0.17
00:01:48.196 Run-time dependency cmocka found: YES 1.1.7
00:01:48.196 Program pytest-3 found: NO
00:01:48.196 Program flake8 found: NO
00:01:48.196 Program misspell-fixer found: NO
00:01:48.196 Program restructuredtext-lint found: NO
00:01:48.196 Program valgrind found: YES (/usr/bin/valgrind)
00:01:48.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.196 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.196 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.196 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:48.196 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:48.196 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:48.196 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:48.196 Build targets in project: 8
00:01:48.196 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:48.196 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:48.196
00:01:48.196 libvfio-user 0.0.1
00:01:48.196
00:01:48.196 User defined options
00:01:48.196 buildtype : debug
00:01:48.196 default_library: shared
00:01:48.196 libdir : /usr/local/lib
00:01:48.196
00:01:48.196 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:48.764 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:48.764 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:48.764 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:48.764 [3/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:48.764 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:48.764 [5/37] Compiling C object samples/null.p/null.c.o
00:01:48.764 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:48.764 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:48.764 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:48.764 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:48.764 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:48.764 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:48.764 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:48.764 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:48.764 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:48.764 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:48.764 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:48.764 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:48.764 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:48.764 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:48.764 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:48.764 [21/37] Compiling C object samples/server.p/server.c.o
00:01:48.764 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:48.764 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:48.764 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:48.764 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:48.764 [26/37] Compiling C object samples/client.p/client.c.o
00:01:49.023 [27/37] Linking target samples/client
00:01:49.023 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:49.023 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:49.023 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:49.023 [31/37] Linking target test/unit_tests
00:01:49.023 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:49.023 [33/37] Linking target samples/server
00:01:49.023 [34/37] Linking target samples/lspci
00:01:49.023 [35/37] Linking target samples/null
00:01:49.023 [36/37] Linking target samples/gpio-pci-idio-16
00:01:49.023 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:49.023 INFO: autodetecting backend as ninja
00:01:49.023 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.282 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:49.541 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:49.541 ninja: no work to do.
00:01:54.819 The Meson build system
00:01:54.819 Version: 1.5.0
00:01:54.819 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:54.819 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:54.819 Build type: native build
00:01:54.819 Program cat found: YES (/usr/bin/cat)
00:01:54.819 Project name: DPDK
00:01:54.819 Project version: 24.03.0
00:01:54.819 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:54.819 C linker for the host machine: cc ld.bfd 2.40-14
00:01:54.819 Host machine cpu family: x86_64
00:01:54.819 Host machine cpu: x86_64
00:01:54.819 Message: ## Building in Developer Mode ##
00:01:54.819 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:54.819 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:54.819 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:54.819 Program python3 found: YES (/usr/bin/python3)
00:01:54.819 Program cat found: YES (/usr/bin/cat)
00:01:54.819 Compiler for C supports arguments -march=native: YES
00:01:54.819 Checking for size of "void *" : 8
00:01:54.819 Checking for size of "void *" : 8 (cached)
00:01:54.819 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:54.819 Library m found: YES
00:01:54.819 Library numa found: YES
00:01:54.819 Has header "numaif.h" : YES
00:01:54.819 Library fdt found: NO
00:01:54.819 Library execinfo found: NO
00:01:54.819 Has header "execinfo.h" : YES
00:01:54.819 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:54.819 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:54.819 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:54.820 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:54.820 Run-time dependency openssl found: YES 3.1.1
00:01:54.820 Run-time dependency libpcap found: YES 1.10.4
00:01:54.820 Has header "pcap.h" with dependency libpcap: YES
00:01:54.820 Compiler for C supports arguments -Wcast-qual: YES
00:01:54.820 Compiler for C supports arguments -Wdeprecated: YES
00:01:54.820 Compiler for C supports arguments -Wformat: YES
00:01:54.820 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:54.820 Compiler for C supports arguments -Wformat-security: NO
00:01:54.820 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:54.820 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:54.820 Compiler for C supports arguments -Wnested-externs: YES
00:01:54.820 Compiler for C supports arguments -Wold-style-definition: YES
00:01:54.820 Compiler for C supports arguments -Wpointer-arith: YES
00:01:54.820 Compiler for C supports arguments -Wsign-compare: YES
00:01:54.820 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:54.820 Compiler for C supports arguments -Wundef: YES
00:01:54.820 Compiler for C supports arguments -Wwrite-strings: YES
00:01:54.820 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:54.820 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:54.820 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:54.820 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:54.820 Program objdump found: YES (/usr/bin/objdump)
00:01:54.820 Compiler for C supports arguments -mavx512f: YES
00:01:54.820 Checking if "AVX512 checking" compiles: YES
00:01:54.820 Fetching value of define "__SSE4_2__" : 1
00:01:54.820 Fetching value of define "__AES__" : 1
00:01:54.820 Fetching value of define "__AVX__" : 1
00:01:54.820 Fetching value of define "__AVX2__" : 1
00:01:54.820 Fetching value of define "__AVX512BW__" : 1
00:01:54.820 Fetching value of define "__AVX512CD__" : 1
00:01:54.820 Fetching value of define "__AVX512DQ__" : 1
00:01:54.820 Fetching value of define "__AVX512F__" : 1
00:01:54.820 Fetching value of define "__AVX512VL__" : 1
00:01:54.820 Fetching value of define "__PCLMUL__" : 1
00:01:54.820 Fetching value of define "__RDRND__" : 1
00:01:54.820 Fetching value of define "__RDSEED__" : 1
00:01:54.820 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:54.820 Fetching value of define "__znver1__" : (undefined)
00:01:54.820 Fetching value of define "__znver2__" : (undefined)
00:01:54.820 Fetching value of define "__znver3__" : (undefined)
00:01:54.820 Fetching value of define "__znver4__" : (undefined)
00:01:54.820 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:54.820 Message: lib/log: Defining dependency "log"
00:01:54.820 Message: lib/kvargs: Defining dependency "kvargs"
00:01:54.820 Message: lib/telemetry: Defining dependency "telemetry"
00:01:54.820 Checking for function "getentropy" : NO
00:01:54.820 Message: lib/eal: Defining dependency "eal"
00:01:54.820 Message: lib/ring: Defining dependency "ring"
00:01:54.820 Message: lib/rcu: Defining dependency "rcu"
00:01:54.820 Message: lib/mempool: Defining dependency "mempool"
00:01:54.820 Message: lib/mbuf: Defining dependency "mbuf"
00:01:54.820 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:54.820 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:54.820 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:54.820 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:54.820 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:54.820 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:54.820 Compiler for C supports arguments -mpclmul: YES
00:01:54.820 Compiler for C supports arguments -maes: YES
00:01:54.820 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:54.820 Compiler for C supports arguments -mavx512bw: YES
00:01:54.820 Compiler for C supports arguments -mavx512dq: YES
00:01:54.820 Compiler for C supports arguments -mavx512vl: YES
00:01:54.820 Compiler for C supports arguments
-mvpclmulqdq: YES 00:01:54.820 Compiler for C supports arguments -mavx2: YES 00:01:54.820 Compiler for C supports arguments -mavx: YES 00:01:54.820 Message: lib/net: Defining dependency "net" 00:01:54.820 Message: lib/meter: Defining dependency "meter" 00:01:54.820 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.820 Message: lib/pci: Defining dependency "pci" 00:01:54.820 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.820 Message: lib/hash: Defining dependency "hash" 00:01:54.820 Message: lib/timer: Defining dependency "timer" 00:01:54.820 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.820 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.820 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.820 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.820 Message: lib/power: Defining dependency "power" 00:01:54.820 Message: lib/reorder: Defining dependency "reorder" 00:01:54.820 Message: lib/security: Defining dependency "security" 00:01:54.820 Has header "linux/userfaultfd.h" : YES 00:01:54.820 Has header "linux/vduse.h" : YES 00:01:54.820 Message: lib/vhost: Defining dependency "vhost" 00:01:54.820 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.820 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.820 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.820 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.820 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.820 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.820 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.820 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.820 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.820 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:54.820 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.820 Configuring doxy-api-html.conf using configuration 00:01:54.820 Configuring doxy-api-man.conf using configuration 00:01:54.820 Program mandb found: YES (/usr/bin/mandb) 00:01:54.820 Program sphinx-build found: NO 00:01:54.820 Configuring rte_build_config.h using configuration 00:01:54.820 Message: 00:01:54.820 ================= 00:01:54.820 Applications Enabled 00:01:54.820 ================= 00:01:54.820 00:01:54.820 apps: 00:01:54.820 00:01:54.820 00:01:54.820 Message: 00:01:54.820 ================= 00:01:54.820 Libraries Enabled 00:01:54.820 ================= 00:01:54.820 00:01:54.820 libs: 00:01:54.820 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.820 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.820 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.820 00:01:54.820 Message: 00:01:54.820 =============== 00:01:54.820 Drivers Enabled 00:01:54.820 =============== 00:01:54.820 00:01:54.820 common: 00:01:54.820 00:01:54.820 bus: 00:01:54.820 pci, vdev, 00:01:54.820 mempool: 00:01:54.821 ring, 00:01:54.821 dma: 00:01:54.821 00:01:54.821 net: 00:01:54.821 00:01:54.821 crypto: 00:01:54.821 00:01:54.821 compress: 00:01:54.821 00:01:54.821 vdpa: 00:01:54.821 00:01:54.821 00:01:54.821 Message: 00:01:54.821 ================= 00:01:54.821 Content Skipped 00:01:54.821 ================= 00:01:54.821 00:01:54.821 apps: 00:01:54.821 dumpcap: explicitly disabled via build config 00:01:54.821 graph: explicitly disabled via build config 00:01:54.821 pdump: explicitly disabled via build config 00:01:54.821 proc-info: explicitly disabled via build config 00:01:54.821 test-acl: explicitly disabled via build config 00:01:54.821 test-bbdev: explicitly disabled via build config 00:01:54.821 test-cmdline: explicitly disabled via build config 00:01:54.821 test-compress-perf: explicitly disabled via build config 00:01:54.821 test-crypto-perf: explicitly disabled 
via build config 00:01:54.821 test-dma-perf: explicitly disabled via build config 00:01:54.821 test-eventdev: explicitly disabled via build config 00:01:54.821 test-fib: explicitly disabled via build config 00:01:54.821 test-flow-perf: explicitly disabled via build config 00:01:54.821 test-gpudev: explicitly disabled via build config 00:01:54.821 test-mldev: explicitly disabled via build config 00:01:54.821 test-pipeline: explicitly disabled via build config 00:01:54.821 test-pmd: explicitly disabled via build config 00:01:54.821 test-regex: explicitly disabled via build config 00:01:54.821 test-sad: explicitly disabled via build config 00:01:54.821 test-security-perf: explicitly disabled via build config 00:01:54.821 00:01:54.821 libs: 00:01:54.821 argparse: explicitly disabled via build config 00:01:54.821 metrics: explicitly disabled via build config 00:01:54.821 acl: explicitly disabled via build config 00:01:54.821 bbdev: explicitly disabled via build config 00:01:54.821 bitratestats: explicitly disabled via build config 00:01:54.821 bpf: explicitly disabled via build config 00:01:54.821 cfgfile: explicitly disabled via build config 00:01:54.821 distributor: explicitly disabled via build config 00:01:54.821 efd: explicitly disabled via build config 00:01:54.821 eventdev: explicitly disabled via build config 00:01:54.821 dispatcher: explicitly disabled via build config 00:01:54.821 gpudev: explicitly disabled via build config 00:01:54.821 gro: explicitly disabled via build config 00:01:54.821 gso: explicitly disabled via build config 00:01:54.821 ip_frag: explicitly disabled via build config 00:01:54.821 jobstats: explicitly disabled via build config 00:01:54.821 latencystats: explicitly disabled via build config 00:01:54.821 lpm: explicitly disabled via build config 00:01:54.821 member: explicitly disabled via build config 00:01:54.821 pcapng: explicitly disabled via build config 00:01:54.821 rawdev: explicitly disabled via build config 00:01:54.821 regexdev: 
explicitly disabled via build config 00:01:54.821 mldev: explicitly disabled via build config 00:01:54.821 rib: explicitly disabled via build config 00:01:54.821 sched: explicitly disabled via build config 00:01:54.821 stack: explicitly disabled via build config 00:01:54.821 ipsec: explicitly disabled via build config 00:01:54.821 pdcp: explicitly disabled via build config 00:01:54.821 fib: explicitly disabled via build config 00:01:54.821 port: explicitly disabled via build config 00:01:54.821 pdump: explicitly disabled via build config 00:01:54.821 table: explicitly disabled via build config 00:01:54.821 pipeline: explicitly disabled via build config 00:01:54.821 graph: explicitly disabled via build config 00:01:54.821 node: explicitly disabled via build config 00:01:54.821 00:01:54.821 drivers: 00:01:54.821 common/cpt: not in enabled drivers build config 00:01:54.821 common/dpaax: not in enabled drivers build config 00:01:54.821 common/iavf: not in enabled drivers build config 00:01:54.821 common/idpf: not in enabled drivers build config 00:01:54.821 common/ionic: not in enabled drivers build config 00:01:54.821 common/mvep: not in enabled drivers build config 00:01:54.821 common/octeontx: not in enabled drivers build config 00:01:54.821 bus/auxiliary: not in enabled drivers build config 00:01:54.821 bus/cdx: not in enabled drivers build config 00:01:54.821 bus/dpaa: not in enabled drivers build config 00:01:54.821 bus/fslmc: not in enabled drivers build config 00:01:54.821 bus/ifpga: not in enabled drivers build config 00:01:54.821 bus/platform: not in enabled drivers build config 00:01:54.821 bus/uacce: not in enabled drivers build config 00:01:54.821 bus/vmbus: not in enabled drivers build config 00:01:54.821 common/cnxk: not in enabled drivers build config 00:01:54.821 common/mlx5: not in enabled drivers build config 00:01:54.821 common/nfp: not in enabled drivers build config 00:01:54.821 common/nitrox: not in enabled drivers build config 00:01:54.821 
common/qat: not in enabled drivers build config 00:01:54.821 common/sfc_efx: not in enabled drivers build config 00:01:54.821 mempool/bucket: not in enabled drivers build config 00:01:54.821 mempool/cnxk: not in enabled drivers build config 00:01:54.821 mempool/dpaa: not in enabled drivers build config 00:01:54.821 mempool/dpaa2: not in enabled drivers build config 00:01:54.821 mempool/octeontx: not in enabled drivers build config 00:01:54.821 mempool/stack: not in enabled drivers build config 00:01:54.821 dma/cnxk: not in enabled drivers build config 00:01:54.821 dma/dpaa: not in enabled drivers build config 00:01:54.821 dma/dpaa2: not in enabled drivers build config 00:01:54.821 dma/hisilicon: not in enabled drivers build config 00:01:54.821 dma/idxd: not in enabled drivers build config 00:01:54.821 dma/ioat: not in enabled drivers build config 00:01:54.821 dma/skeleton: not in enabled drivers build config 00:01:54.821 net/af_packet: not in enabled drivers build config 00:01:54.821 net/af_xdp: not in enabled drivers build config 00:01:54.821 net/ark: not in enabled drivers build config 00:01:54.821 net/atlantic: not in enabled drivers build config 00:01:54.821 net/avp: not in enabled drivers build config 00:01:54.821 net/axgbe: not in enabled drivers build config 00:01:54.821 net/bnx2x: not in enabled drivers build config 00:01:54.821 net/bnxt: not in enabled drivers build config 00:01:54.821 net/bonding: not in enabled drivers build config 00:01:54.821 net/cnxk: not in enabled drivers build config 00:01:54.821 net/cpfl: not in enabled drivers build config 00:01:54.821 net/cxgbe: not in enabled drivers build config 00:01:54.821 net/dpaa: not in enabled drivers build config 00:01:54.821 net/dpaa2: not in enabled drivers build config 00:01:54.821 net/e1000: not in enabled drivers build config 00:01:54.821 net/ena: not in enabled drivers build config 00:01:54.821 net/enetc: not in enabled drivers build config 00:01:54.821 net/enetfec: not in enabled drivers build 
config 00:01:54.821 net/enic: not in enabled drivers build config 00:01:54.821 net/failsafe: not in enabled drivers build config 00:01:54.821 net/fm10k: not in enabled drivers build config 00:01:54.821 net/gve: not in enabled drivers build config 00:01:54.821 net/hinic: not in enabled drivers build config 00:01:54.821 net/hns3: not in enabled drivers build config 00:01:54.821 net/i40e: not in enabled drivers build config 00:01:54.821 net/iavf: not in enabled drivers build config 00:01:54.821 net/ice: not in enabled drivers build config 00:01:54.821 net/idpf: not in enabled drivers build config 00:01:54.821 net/igc: not in enabled drivers build config 00:01:54.821 net/ionic: not in enabled drivers build config 00:01:54.821 net/ipn3ke: not in enabled drivers build config 00:01:54.821 net/ixgbe: not in enabled drivers build config 00:01:54.821 net/mana: not in enabled drivers build config 00:01:54.821 net/memif: not in enabled drivers build config 00:01:54.821 net/mlx4: not in enabled drivers build config 00:01:54.821 net/mlx5: not in enabled drivers build config 00:01:54.821 net/mvneta: not in enabled drivers build config 00:01:54.821 net/mvpp2: not in enabled drivers build config 00:01:54.821 net/netvsc: not in enabled drivers build config 00:01:54.821 net/nfb: not in enabled drivers build config 00:01:54.821 net/nfp: not in enabled drivers build config 00:01:54.821 net/ngbe: not in enabled drivers build config 00:01:54.821 net/null: not in enabled drivers build config 00:01:54.821 net/octeontx: not in enabled drivers build config 00:01:54.821 net/octeon_ep: not in enabled drivers build config 00:01:54.821 net/pcap: not in enabled drivers build config 00:01:54.821 net/pfe: not in enabled drivers build config 00:01:54.821 net/qede: not in enabled drivers build config 00:01:54.821 net/ring: not in enabled drivers build config 00:01:54.821 net/sfc: not in enabled drivers build config 00:01:54.821 net/softnic: not in enabled drivers build config 00:01:54.821 net/tap: 
not in enabled drivers build config 00:01:54.821 net/thunderx: not in enabled drivers build config 00:01:54.821 net/txgbe: not in enabled drivers build config 00:01:54.821 net/vdev_netvsc: not in enabled drivers build config 00:01:54.822 net/vhost: not in enabled drivers build config 00:01:54.822 net/virtio: not in enabled drivers build config 00:01:54.822 net/vmxnet3: not in enabled drivers build config 00:01:54.822 raw/*: missing internal dependency, "rawdev" 00:01:54.822 crypto/armv8: not in enabled drivers build config 00:01:54.822 crypto/bcmfs: not in enabled drivers build config 00:01:54.822 crypto/caam_jr: not in enabled drivers build config 00:01:54.822 crypto/ccp: not in enabled drivers build config 00:01:54.822 crypto/cnxk: not in enabled drivers build config 00:01:54.822 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.822 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.822 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.822 crypto/mlx5: not in enabled drivers build config 00:01:54.822 crypto/mvsam: not in enabled drivers build config 00:01:54.822 crypto/nitrox: not in enabled drivers build config 00:01:54.822 crypto/null: not in enabled drivers build config 00:01:54.822 crypto/octeontx: not in enabled drivers build config 00:01:54.822 crypto/openssl: not in enabled drivers build config 00:01:54.822 crypto/scheduler: not in enabled drivers build config 00:01:54.822 crypto/uadk: not in enabled drivers build config 00:01:54.822 crypto/virtio: not in enabled drivers build config 00:01:54.822 compress/isal: not in enabled drivers build config 00:01:54.822 compress/mlx5: not in enabled drivers build config 00:01:54.822 compress/nitrox: not in enabled drivers build config 00:01:54.822 compress/octeontx: not in enabled drivers build config 00:01:54.822 compress/zlib: not in enabled drivers build config 00:01:54.822 regex/*: missing internal dependency, "regexdev" 00:01:54.822 ml/*: missing internal dependency, "mldev" 
00:01:54.822 vdpa/ifc: not in enabled drivers build config 00:01:54.822 vdpa/mlx5: not in enabled drivers build config 00:01:54.822 vdpa/nfp: not in enabled drivers build config 00:01:54.822 vdpa/sfc: not in enabled drivers build config 00:01:54.822 event/*: missing internal dependency, "eventdev" 00:01:54.822 baseband/*: missing internal dependency, "bbdev" 00:01:54.822 gpu/*: missing internal dependency, "gpudev" 00:01:54.822 00:01:54.822 00:01:54.822 Build targets in project: 85 00:01:54.822 00:01:54.822 DPDK 24.03.0 00:01:54.822 00:01:54.822 User defined options 00:01:54.822 buildtype : debug 00:01:54.822 default_library : shared 00:01:54.822 libdir : lib 00:01:54.822 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.822 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.822 c_link_args : 00:01:54.822 cpu_instruction_set: native 00:01:54.822 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:54.822 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:54.822 enable_docs : false 00:01:54.822 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:54.822 enable_kmods : false 00:01:54.822 max_lcores : 128 00:01:54.822 tests : false 00:01:54.822 00:01:54.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.082 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:55.348 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.348 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.348 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.348 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.348 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:55.348 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.348 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:55.348 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.348 [9/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.348 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:55.348 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.348 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.348 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.348 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.348 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.348 [16/268] Linking static target lib/librte_log.a 00:01:55.348 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.348 [18/268] Linking static target lib/librte_kvargs.a 00:01:55.348 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.609 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.609 [21/268] Linking static target lib/librte_pci.a 00:01:55.609 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.609 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.609 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.876 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.876 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.876 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.876 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.876 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.876 [30/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.876 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.876 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.876 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.876 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.876 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.876 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.876 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.876 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.876 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.876 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.876 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.876 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.876 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.876 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.876 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.876 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.876 
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.876 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.876 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.876 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.876 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.876 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.876 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.876 [54/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.876 [55/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.876 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.876 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.876 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.876 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.876 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.876 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.876 [62/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.876 [63/268] Linking static target lib/librte_ring.a 00:01:55.876 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.876 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.876 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.876 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.876 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.876 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:01:55.876 [70/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.876 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.876 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.876 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.876 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.876 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.876 [76/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.876 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.876 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.876 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.876 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.876 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.876 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.876 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.876 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.876 [85/268] Linking static target lib/librte_meter.a 00:01:55.876 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.876 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.876 [88/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.136 [89/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.136 [90/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.136 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.136 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.136 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.136 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.136 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.136 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.136 [97/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.136 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.136 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.136 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.136 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.136 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.136 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.136 [104/268] Linking static target lib/librte_telemetry.a 00:01:56.136 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.136 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.136 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.136 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.136 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.136 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.136 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.136 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.136 [113/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.136 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.136 
[115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:56.136 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.136 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.136 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:56.136 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:56.136 [120/268] Linking static target lib/librte_rcu.a 00:01:56.136 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.136 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:56.136 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.136 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:56.136 [125/268] Linking static target lib/librte_eal.a 00:01:56.136 [126/268] Linking static target lib/librte_net.a 00:01:56.136 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.136 [128/268] Linking static target lib/librte_mempool.a 00:01:56.136 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.136 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.136 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.136 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.136 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.136 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.136 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.136 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.136 [137/268] Linking static target lib/librte_cmdline.a 00:01:56.136 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:56.395 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:56.395 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.395 [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.395 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.395 [143/268] Linking static target lib/librte_mbuf.a 00:01:56.395 [144/268] Linking target lib/librte_log.so.24.1 00:01:56.395 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.395 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:56.395 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.395 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.395 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.395 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.395 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.395 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.395 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.395 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:56.395 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.395 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.395 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:56.395 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.395 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:56.395 [160/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:56.395 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.395 [162/268] Linking static target lib/librte_timer.a 00:01:56.395 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.395 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:56.395 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:56.395 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.396 [167/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.396 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.396 [169/268] Linking target lib/librte_kvargs.so.24.1 00:01:56.396 [170/268] Linking static target lib/librte_power.a 00:01:56.396 [171/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.396 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.396 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.396 [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.396 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.396 [176/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.396 [177/268] Linking target lib/librte_telemetry.so.24.1 00:01:56.396 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.396 [179/268] Linking static target lib/librte_reorder.a 00:01:56.396 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.396 [181/268] Linking static target lib/librte_security.a 00:01:56.396 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.655 [183/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.655 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.655 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.655 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.655 [187/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.655 [188/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.655 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.655 [190/268] Linking static target lib/librte_compressdev.a 00:01:56.655 [191/268] Linking static target lib/librte_dmadev.a 00:01:56.655 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.655 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.655 [194/268] Linking static target lib/librte_hash.a 00:01:56.655 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:56.655 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.655 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.655 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.655 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.655 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.655 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.655 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.655 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.655 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.655 [205/268] Linking static 
target drivers/librte_mempool_ring.a 00:01:56.655 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:56.655 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.914 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.914 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.914 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:56.914 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.914 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.914 [213/268] Linking static target lib/librte_cryptodev.a 00:01:56.914 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.914 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.914 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.173 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.173 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.173 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.173 [220/268] Linking static target lib/librte_ethdev.a 00:01:57.173 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.173 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.432 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.691 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.628 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.628 [229/268] Linking static target lib/librte_vhost.a 00:01:58.888 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.268 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.585 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.155 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.155 [234/268] Linking target lib/librte_eal.so.24.1 00:02:06.415 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.415 [236/268] Linking target lib/librte_pci.so.24.1 00:02:06.415 [237/268] Linking target lib/librte_meter.so.24.1 00:02:06.415 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:06.415 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.415 [240/268] Linking target lib/librte_timer.so.24.1 00:02:06.415 [241/268] Linking target lib/librte_ring.so.24.1 00:02:06.415 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.415 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.415 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.415 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.415 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.415 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.415 
[248/268] Linking target lib/librte_rcu.so.24.1 00:02:06.415 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:06.675 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.675 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.675 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.675 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:06.934 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.934 [255/268] Linking target lib/librte_net.so.24.1 00:02:06.934 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:06.934 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:06.934 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:06.934 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.934 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.194 [261/268] Linking target lib/librte_hash.so.24.1 00:02:07.194 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:07.194 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:07.194 [264/268] Linking target lib/librte_security.so.24.1 00:02:07.194 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.194 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.194 [267/268] Linking target lib/librte_power.so.24.1 00:02:07.194 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:07.194 INFO: autodetecting backend as ninja 00:02:07.194 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:17.186 CC lib/ut_mock/mock.o 00:02:17.186 CC lib/ut/ut.o 00:02:17.186 CC lib/log/log.o 00:02:17.187 CC lib/log/log_flags.o 00:02:17.187 CC lib/log/log_deprecated.o 00:02:17.447 LIB 
libspdk_ut_mock.a 00:02:17.447 LIB libspdk_log.a 00:02:17.447 LIB libspdk_ut.a 00:02:17.447 SO libspdk_ut_mock.so.6.0 00:02:17.447 SO libspdk_ut.so.2.0 00:02:17.447 SO libspdk_log.so.7.1 00:02:17.447 SYMLINK libspdk_ut_mock.so 00:02:17.447 SYMLINK libspdk_ut.so 00:02:17.447 SYMLINK libspdk_log.so 00:02:17.707 CC lib/dma/dma.o 00:02:17.707 CC lib/util/base64.o 00:02:17.707 CC lib/util/bit_array.o 00:02:17.707 CC lib/util/cpuset.o 00:02:17.707 CC lib/util/crc16.o 00:02:17.707 CC lib/util/crc32.o 00:02:17.707 CC lib/ioat/ioat.o 00:02:17.707 CC lib/util/crc32c.o 00:02:17.707 CC lib/util/crc32_ieee.o 00:02:17.707 CC lib/util/crc64.o 00:02:17.707 CC lib/util/dif.o 00:02:17.707 CC lib/util/fd.o 00:02:17.707 CC lib/util/fd_group.o 00:02:17.707 CXX lib/trace_parser/trace.o 00:02:17.707 CC lib/util/file.o 00:02:17.707 CC lib/util/hexlify.o 00:02:17.707 CC lib/util/iov.o 00:02:17.707 CC lib/util/math.o 00:02:17.707 CC lib/util/net.o 00:02:17.707 CC lib/util/pipe.o 00:02:17.707 CC lib/util/strerror_tls.o 00:02:17.707 CC lib/util/string.o 00:02:17.707 CC lib/util/uuid.o 00:02:17.707 CC lib/util/zipf.o 00:02:17.707 CC lib/util/xor.o 00:02:17.707 CC lib/util/md5.o 00:02:17.966 CC lib/vfio_user/host/vfio_user.o 00:02:17.966 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.966 LIB libspdk_dma.a 00:02:17.966 SO libspdk_dma.so.5.0 00:02:17.966 LIB libspdk_ioat.a 00:02:17.966 SYMLINK libspdk_dma.so 00:02:17.966 SO libspdk_ioat.so.7.0 00:02:18.226 SYMLINK libspdk_ioat.so 00:02:18.226 LIB libspdk_vfio_user.a 00:02:18.226 SO libspdk_vfio_user.so.5.0 00:02:18.226 LIB libspdk_util.a 00:02:18.226 SYMLINK libspdk_vfio_user.so 00:02:18.226 SO libspdk_util.so.10.1 00:02:18.486 SYMLINK libspdk_util.so 00:02:18.486 LIB libspdk_trace_parser.a 00:02:18.486 SO libspdk_trace_parser.so.6.0 00:02:18.486 SYMLINK libspdk_trace_parser.so 00:02:18.745 CC lib/json/json_parse.o 00:02:18.745 CC lib/json/json_util.o 00:02:18.745 CC lib/json/json_write.o 00:02:18.745 CC lib/conf/conf.o 00:02:18.745 CC 
lib/env_dpdk/env.o 00:02:18.745 CC lib/env_dpdk/memory.o 00:02:18.745 CC lib/env_dpdk/pci.o 00:02:18.745 CC lib/idxd/idxd.o 00:02:18.745 CC lib/env_dpdk/init.o 00:02:18.745 CC lib/env_dpdk/threads.o 00:02:18.745 CC lib/idxd/idxd_user.o 00:02:18.745 CC lib/rdma_utils/rdma_utils.o 00:02:18.745 CC lib/idxd/idxd_kernel.o 00:02:18.745 CC lib/env_dpdk/pci_ioat.o 00:02:18.745 CC lib/env_dpdk/pci_virtio.o 00:02:18.745 CC lib/vmd/vmd.o 00:02:18.745 CC lib/env_dpdk/pci_vmd.o 00:02:18.745 CC lib/vmd/led.o 00:02:18.745 CC lib/env_dpdk/pci_idxd.o 00:02:18.745 CC lib/env_dpdk/pci_event.o 00:02:18.746 CC lib/env_dpdk/sigbus_handler.o 00:02:18.746 CC lib/env_dpdk/pci_dpdk.o 00:02:18.746 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.746 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:19.004 LIB libspdk_conf.a 00:02:19.004 SO libspdk_conf.so.6.0 00:02:19.004 LIB libspdk_json.a 00:02:19.004 LIB libspdk_rdma_utils.a 00:02:19.004 SO libspdk_rdma_utils.so.1.0 00:02:19.004 SYMLINK libspdk_conf.so 00:02:19.004 SO libspdk_json.so.6.0 00:02:19.004 SYMLINK libspdk_rdma_utils.so 00:02:19.004 SYMLINK libspdk_json.so 00:02:19.263 LIB libspdk_idxd.a 00:02:19.263 LIB libspdk_vmd.a 00:02:19.263 SO libspdk_idxd.so.12.1 00:02:19.263 SO libspdk_vmd.so.6.0 00:02:19.263 SYMLINK libspdk_idxd.so 00:02:19.263 SYMLINK libspdk_vmd.so 00:02:19.523 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.523 CC lib/rdma_provider/common.o 00:02:19.523 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.523 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.523 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:19.523 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.523 LIB libspdk_rdma_provider.a 00:02:19.523 LIB libspdk_jsonrpc.a 00:02:19.783 SO libspdk_rdma_provider.so.7.0 00:02:19.783 SO libspdk_jsonrpc.so.6.0 00:02:19.783 SYMLINK libspdk_rdma_provider.so 00:02:19.783 SYMLINK libspdk_jsonrpc.so 00:02:19.783 LIB libspdk_env_dpdk.a 00:02:19.783 SO libspdk_env_dpdk.so.15.1 00:02:20.042 SYMLINK libspdk_env_dpdk.so 00:02:20.042 CC lib/rpc/rpc.o 00:02:20.303 LIB 
libspdk_rpc.a 00:02:20.303 SO libspdk_rpc.so.6.0 00:02:20.303 SYMLINK libspdk_rpc.so 00:02:20.562 CC lib/trace/trace.o 00:02:20.562 CC lib/trace/trace_flags.o 00:02:20.562 CC lib/trace/trace_rpc.o 00:02:20.562 CC lib/notify/notify.o 00:02:20.563 CC lib/notify/notify_rpc.o 00:02:20.563 CC lib/keyring/keyring.o 00:02:20.563 CC lib/keyring/keyring_rpc.o 00:02:20.822 LIB libspdk_notify.a 00:02:20.822 SO libspdk_notify.so.6.0 00:02:20.822 LIB libspdk_trace.a 00:02:20.822 LIB libspdk_keyring.a 00:02:20.822 SO libspdk_trace.so.11.0 00:02:20.822 SO libspdk_keyring.so.2.0 00:02:20.822 SYMLINK libspdk_notify.so 00:02:21.082 SYMLINK libspdk_trace.so 00:02:21.082 SYMLINK libspdk_keyring.so 00:02:21.342 CC lib/thread/thread.o 00:02:21.342 CC lib/thread/iobuf.o 00:02:21.342 CC lib/sock/sock.o 00:02:21.342 CC lib/sock/sock_rpc.o 00:02:21.602 LIB libspdk_sock.a 00:02:21.602 SO libspdk_sock.so.10.0 00:02:21.602 SYMLINK libspdk_sock.so 00:02:22.171 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.171 CC lib/nvme/nvme_ctrlr.o 00:02:22.171 CC lib/nvme/nvme_fabric.o 00:02:22.171 CC lib/nvme/nvme_ns_cmd.o 00:02:22.171 CC lib/nvme/nvme_ns.o 00:02:22.171 CC lib/nvme/nvme_pcie_common.o 00:02:22.171 CC lib/nvme/nvme_pcie.o 00:02:22.171 CC lib/nvme/nvme_qpair.o 00:02:22.171 CC lib/nvme/nvme.o 00:02:22.171 CC lib/nvme/nvme_quirks.o 00:02:22.171 CC lib/nvme/nvme_transport.o 00:02:22.171 CC lib/nvme/nvme_discovery.o 00:02:22.171 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.171 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.171 CC lib/nvme/nvme_tcp.o 00:02:22.171 CC lib/nvme/nvme_opal.o 00:02:22.171 CC lib/nvme/nvme_io_msg.o 00:02:22.171 CC lib/nvme/nvme_poll_group.o 00:02:22.171 CC lib/nvme/nvme_zns.o 00:02:22.171 CC lib/nvme/nvme_stubs.o 00:02:22.171 CC lib/nvme/nvme_auth.o 00:02:22.171 CC lib/nvme/nvme_cuse.o 00:02:22.171 CC lib/nvme/nvme_vfio_user.o 00:02:22.171 CC lib/nvme/nvme_rdma.o 00:02:22.430 LIB libspdk_thread.a 00:02:22.430 SO libspdk_thread.so.11.0 00:02:22.430 SYMLINK libspdk_thread.so 00:02:22.689 
CC lib/virtio/virtio.o 00:02:22.689 CC lib/virtio/virtio_vhost_user.o 00:02:22.689 CC lib/virtio/virtio_vfio_user.o 00:02:22.689 CC lib/init/json_config.o 00:02:22.689 CC lib/init/subsystem_rpc.o 00:02:22.689 CC lib/virtio/virtio_pci.o 00:02:22.689 CC lib/init/subsystem.o 00:02:22.689 CC lib/init/rpc.o 00:02:22.689 CC lib/vfu_tgt/tgt_endpoint.o 00:02:22.689 CC lib/vfu_tgt/tgt_rpc.o 00:02:22.689 CC lib/blob/request.o 00:02:22.689 CC lib/blob/blobstore.o 00:02:22.689 CC lib/fsdev/fsdev_io.o 00:02:22.689 CC lib/fsdev/fsdev.o 00:02:22.689 CC lib/blob/zeroes.o 00:02:22.689 CC lib/blob/blob_bs_dev.o 00:02:22.689 CC lib/fsdev/fsdev_rpc.o 00:02:22.689 CC lib/accel/accel_rpc.o 00:02:22.689 CC lib/accel/accel.o 00:02:22.689 CC lib/accel/accel_sw.o 00:02:22.948 LIB libspdk_init.a 00:02:22.948 SO libspdk_init.so.6.0 00:02:22.948 LIB libspdk_virtio.a 00:02:22.948 LIB libspdk_vfu_tgt.a 00:02:23.207 SO libspdk_virtio.so.7.0 00:02:23.207 SYMLINK libspdk_init.so 00:02:23.207 SO libspdk_vfu_tgt.so.3.0 00:02:23.207 SYMLINK libspdk_virtio.so 00:02:23.207 SYMLINK libspdk_vfu_tgt.so 00:02:23.207 LIB libspdk_fsdev.a 00:02:23.207 SO libspdk_fsdev.so.2.0 00:02:23.466 SYMLINK libspdk_fsdev.so 00:02:23.466 CC lib/event/app.o 00:02:23.466 CC lib/event/reactor.o 00:02:23.466 CC lib/event/log_rpc.o 00:02:23.466 CC lib/event/app_rpc.o 00:02:23.466 CC lib/event/scheduler_static.o 00:02:23.725 LIB libspdk_accel.a 00:02:23.725 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:23.725 SO libspdk_accel.so.16.0 00:02:23.725 LIB libspdk_nvme.a 00:02:23.725 SYMLINK libspdk_accel.so 00:02:23.725 LIB libspdk_event.a 00:02:23.725 SO libspdk_nvme.so.15.0 00:02:23.725 SO libspdk_event.so.14.0 00:02:23.984 SYMLINK libspdk_event.so 00:02:23.984 SYMLINK libspdk_nvme.so 00:02:23.984 CC lib/bdev/bdev.o 00:02:23.984 CC lib/bdev/bdev_rpc.o 00:02:23.984 CC lib/bdev/bdev_zone.o 00:02:23.984 CC lib/bdev/part.o 00:02:23.984 CC lib/bdev/scsi_nvme.o 00:02:24.242 LIB libspdk_fuse_dispatcher.a 00:02:24.242 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:24.242 SYMLINK libspdk_fuse_dispatcher.so 00:02:24.810 LIB libspdk_blob.a 00:02:24.810 SO libspdk_blob.so.11.0 00:02:25.097 SYMLINK libspdk_blob.so 00:02:25.378 CC lib/lvol/lvol.o 00:02:25.378 CC lib/blobfs/blobfs.o 00:02:25.378 CC lib/blobfs/tree.o 00:02:25.997 LIB libspdk_bdev.a 00:02:25.997 SO libspdk_bdev.so.17.0 00:02:25.997 LIB libspdk_blobfs.a 00:02:25.997 SO libspdk_blobfs.so.10.0 00:02:25.997 LIB libspdk_lvol.a 00:02:25.997 SO libspdk_lvol.so.10.0 00:02:25.997 SYMLINK libspdk_bdev.so 00:02:25.997 SYMLINK libspdk_blobfs.so 00:02:25.997 SYMLINK libspdk_lvol.so 00:02:26.256 CC lib/scsi/lun.o 00:02:26.256 CC lib/scsi/dev.o 00:02:26.256 CC lib/nvmf/ctrlr.o 00:02:26.256 CC lib/scsi/port.o 00:02:26.256 CC lib/nvmf/ctrlr_discovery.o 00:02:26.256 CC lib/scsi/scsi.o 00:02:26.256 CC lib/nbd/nbd.o 00:02:26.256 CC lib/ftl/ftl_core.o 00:02:26.256 CC lib/nvmf/ctrlr_bdev.o 00:02:26.256 CC lib/ftl/ftl_init.o 00:02:26.256 CC lib/nbd/nbd_rpc.o 00:02:26.256 CC lib/scsi/scsi_bdev.o 00:02:26.256 CC lib/nvmf/subsystem.o 00:02:26.256 CC lib/ftl/ftl_layout.o 00:02:26.256 CC lib/scsi/scsi_pr.o 00:02:26.256 CC lib/nvmf/nvmf.o 00:02:26.256 CC lib/nvmf/nvmf_rpc.o 00:02:26.256 CC lib/scsi/scsi_rpc.o 00:02:26.256 CC lib/ftl/ftl_debug.o 00:02:26.256 CC lib/ftl/ftl_io.o 00:02:26.256 CC lib/scsi/task.o 00:02:26.256 CC lib/ftl/ftl_sb.o 00:02:26.256 CC lib/nvmf/transport.o 00:02:26.256 CC lib/ftl/ftl_l2p.o 00:02:26.256 CC lib/nvmf/tcp.o 00:02:26.256 CC lib/ftl/ftl_l2p_flat.o 00:02:26.256 CC lib/nvmf/mdns_server.o 00:02:26.256 CC lib/ftl/ftl_band.o 00:02:26.256 CC lib/ftl/ftl_nv_cache.o 00:02:26.256 CC lib/nvmf/stubs.o 00:02:26.256 CC lib/ublk/ublk.o 00:02:26.256 CC lib/nvmf/vfio_user.o 00:02:26.256 CC lib/ublk/ublk_rpc.o 00:02:26.256 CC lib/nvmf/rdma.o 00:02:26.256 CC lib/ftl/ftl_writer.o 00:02:26.256 CC lib/ftl/ftl_band_ops.o 00:02:26.256 CC lib/ftl/ftl_rq.o 00:02:26.256 CC lib/ftl/ftl_reloc.o 00:02:26.256 CC lib/nvmf/auth.o 00:02:26.256 CC 
lib/ftl/ftl_l2p_cache.o 00:02:26.256 CC lib/ftl/ftl_p2l.o 00:02:26.256 CC lib/ftl/ftl_p2l_log.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:26.256 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:26.256 CC lib/ftl/utils/ftl_conf.o 00:02:26.256 CC lib/ftl/utils/ftl_md.o 00:02:26.256 CC lib/ftl/utils/ftl_mempool.o 00:02:26.256 CC lib/ftl/utils/ftl_bitmap.o 00:02:26.256 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:26.256 CC lib/ftl/utils/ftl_property.o 00:02:26.256 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.256 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.256 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.256 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.256 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.256 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:26.256 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:26.256 CC lib/ftl/base/ftl_base_dev.o 00:02:26.256 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.257 CC lib/ftl/ftl_trace.o 00:02:26.824 LIB libspdk_nbd.a 00:02:26.824 LIB libspdk_scsi.a 00:02:27.082 SO libspdk_nbd.so.7.0 00:02:27.082 SO libspdk_scsi.so.9.0 00:02:27.082 LIB libspdk_ublk.a 00:02:27.082 SYMLINK libspdk_nbd.so 00:02:27.082 SO libspdk_ublk.so.3.0 00:02:27.082 SYMLINK libspdk_scsi.so 00:02:27.082 SYMLINK libspdk_ublk.so 00:02:27.340 LIB 
libspdk_ftl.a 00:02:27.340 SO libspdk_ftl.so.9.0 00:02:27.340 CC lib/vhost/vhost.o 00:02:27.340 CC lib/vhost/vhost_rpc.o 00:02:27.340 CC lib/vhost/vhost_scsi.o 00:02:27.340 CC lib/vhost/vhost_blk.o 00:02:27.340 CC lib/vhost/rte_vhost_user.o 00:02:27.340 CC lib/iscsi/conn.o 00:02:27.340 CC lib/iscsi/init_grp.o 00:02:27.340 CC lib/iscsi/iscsi.o 00:02:27.340 CC lib/iscsi/param.o 00:02:27.340 CC lib/iscsi/portal_grp.o 00:02:27.340 CC lib/iscsi/tgt_node.o 00:02:27.340 CC lib/iscsi/iscsi_subsystem.o 00:02:27.340 CC lib/iscsi/iscsi_rpc.o 00:02:27.340 CC lib/iscsi/task.o 00:02:27.599 SYMLINK libspdk_ftl.so 00:02:28.167 LIB libspdk_nvmf.a 00:02:28.167 SO libspdk_nvmf.so.20.0 00:02:28.167 LIB libspdk_vhost.a 00:02:28.167 SO libspdk_vhost.so.8.0 00:02:28.167 SYMLINK libspdk_nvmf.so 00:02:28.426 SYMLINK libspdk_vhost.so 00:02:28.426 LIB libspdk_iscsi.a 00:02:28.426 SO libspdk_iscsi.so.8.0 00:02:28.685 SYMLINK libspdk_iscsi.so 00:02:28.945 CC module/vfu_device/vfu_virtio.o 00:02:28.945 CC module/vfu_device/vfu_virtio_blk.o 00:02:28.945 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.945 CC module/vfu_device/vfu_virtio_rpc.o 00:02:28.945 CC module/vfu_device/vfu_virtio_scsi.o 00:02:28.945 CC module/vfu_device/vfu_virtio_fs.o 00:02:29.204 CC module/keyring/file/keyring.o 00:02:29.204 CC module/blob/bdev/blob_bdev.o 00:02:29.204 CC module/keyring/file/keyring_rpc.o 00:02:29.204 CC module/accel/iaa/accel_iaa.o 00:02:29.204 CC module/accel/ioat/accel_ioat.o 00:02:29.204 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.204 CC module/keyring/linux/keyring.o 00:02:29.204 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.204 CC module/keyring/linux/keyring_rpc.o 00:02:29.204 CC module/sock/posix/posix.o 00:02:29.204 CC module/accel/error/accel_error.o 00:02:29.204 CC module/accel/error/accel_error_rpc.o 00:02:29.204 CC module/fsdev/aio/fsdev_aio.o 00:02:29.204 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.204 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:29.204 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:29.204 LIB libspdk_env_dpdk_rpc.a 00:02:29.204 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.204 CC module/accel/dsa/accel_dsa.o 00:02:29.204 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.204 CC module/accel/dsa/accel_dsa_rpc.o 00:02:29.204 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.204 SYMLINK libspdk_env_dpdk_rpc.so 00:02:29.463 LIB libspdk_keyring_linux.a 00:02:29.463 LIB libspdk_keyring_file.a 00:02:29.463 LIB libspdk_scheduler_gscheduler.a 00:02:29.463 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.463 SO libspdk_keyring_file.so.2.0 00:02:29.463 SO libspdk_keyring_linux.so.1.0 00:02:29.463 SO libspdk_scheduler_gscheduler.so.4.0 00:02:29.463 LIB libspdk_accel_iaa.a 00:02:29.463 LIB libspdk_accel_ioat.a 00:02:29.463 LIB libspdk_accel_error.a 00:02:29.463 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:29.463 LIB libspdk_scheduler_dynamic.a 00:02:29.463 SYMLINK libspdk_keyring_file.so 00:02:29.463 SO libspdk_accel_iaa.so.3.0 00:02:29.463 SO libspdk_accel_error.so.2.0 00:02:29.463 SO libspdk_accel_ioat.so.6.0 00:02:29.463 SYMLINK libspdk_keyring_linux.so 00:02:29.463 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.463 LIB libspdk_accel_dsa.a 00:02:29.463 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.463 LIB libspdk_blob_bdev.a 00:02:29.463 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.463 SYMLINK libspdk_accel_error.so 00:02:29.463 SYMLINK libspdk_accel_iaa.so 00:02:29.463 SO libspdk_accel_dsa.so.5.0 00:02:29.463 SYMLINK libspdk_accel_ioat.so 00:02:29.463 SO libspdk_blob_bdev.so.11.0 00:02:29.463 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.721 SYMLINK libspdk_accel_dsa.so 00:02:29.721 SYMLINK libspdk_blob_bdev.so 00:02:29.721 LIB libspdk_vfu_device.a 00:02:29.721 SO libspdk_vfu_device.so.3.0 00:02:29.721 SYMLINK libspdk_vfu_device.so 00:02:29.721 LIB libspdk_fsdev_aio.a 00:02:29.721 SO libspdk_fsdev_aio.so.1.0 00:02:29.721 LIB libspdk_sock_posix.a 00:02:29.980 SO libspdk_sock_posix.so.6.0 00:02:29.980 
SYMLINK libspdk_fsdev_aio.so 00:02:29.980 SYMLINK libspdk_sock_posix.so 00:02:29.980 CC module/bdev/malloc/bdev_malloc.o 00:02:29.980 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.980 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.980 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.980 CC module/bdev/error/vbdev_error.o 00:02:29.980 CC module/bdev/gpt/gpt.o 00:02:29.980 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.980 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.980 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.980 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.980 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.980 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.980 CC module/bdev/null/bdev_null_rpc.o 00:02:29.980 CC module/bdev/null/bdev_null.o 00:02:29.980 CC module/bdev/split/vbdev_split.o 00:02:29.980 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.980 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.980 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.980 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.980 CC module/bdev/delay/vbdev_delay.o 00:02:29.980 CC module/bdev/ftl/bdev_ftl.o 00:02:29.980 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.980 CC module/bdev/aio/bdev_aio.o 00:02:29.980 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.980 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.980 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.980 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:29.980 CC module/bdev/nvme/bdev_nvme.o 00:02:29.980 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.980 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.980 CC module/bdev/nvme/nvme_rpc.o 00:02:29.980 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.980 CC module/bdev/nvme/vbdev_opal.o 00:02:29.980 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.980 CC module/bdev/raid/bdev_raid.o 00:02:29.980 CC module/bdev/raid/bdev_raid_sb.o 00:02:29.980 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.980 CC module/bdev/raid/raid1.o 00:02:29.980 CC module/bdev/raid/raid0.o 00:02:29.980 CC 
module/bdev/raid/concat.o 00:02:29.980 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.980 CC module/bdev/iscsi/bdev_iscsi.o 00:02:30.238 LIB libspdk_blobfs_bdev.a 00:02:30.238 LIB libspdk_bdev_split.a 00:02:30.497 SO libspdk_blobfs_bdev.so.6.0 00:02:30.497 LIB libspdk_bdev_error.a 00:02:30.497 LIB libspdk_bdev_null.a 00:02:30.497 SO libspdk_bdev_split.so.6.0 00:02:30.497 LIB libspdk_bdev_gpt.a 00:02:30.497 SO libspdk_bdev_gpt.so.6.0 00:02:30.497 SO libspdk_bdev_error.so.6.0 00:02:30.497 SO libspdk_bdev_null.so.6.0 00:02:30.497 SYMLINK libspdk_blobfs_bdev.so 00:02:30.497 SYMLINK libspdk_bdev_split.so 00:02:30.497 LIB libspdk_bdev_passthru.a 00:02:30.497 LIB libspdk_bdev_ftl.a 00:02:30.497 SYMLINK libspdk_bdev_error.so 00:02:30.497 SYMLINK libspdk_bdev_gpt.so 00:02:30.497 SYMLINK libspdk_bdev_null.so 00:02:30.497 LIB libspdk_bdev_zone_block.a 00:02:30.497 LIB libspdk_bdev_delay.a 00:02:30.497 LIB libspdk_bdev_aio.a 00:02:30.497 LIB libspdk_bdev_malloc.a 00:02:30.497 SO libspdk_bdev_ftl.so.6.0 00:02:30.497 SO libspdk_bdev_passthru.so.6.0 00:02:30.497 SO libspdk_bdev_delay.so.6.0 00:02:30.497 SO libspdk_bdev_zone_block.so.6.0 00:02:30.497 LIB libspdk_bdev_iscsi.a 00:02:30.497 SO libspdk_bdev_aio.so.6.0 00:02:30.497 SO libspdk_bdev_malloc.so.6.0 00:02:30.497 SYMLINK libspdk_bdev_passthru.so 00:02:30.497 SYMLINK libspdk_bdev_ftl.so 00:02:30.497 SO libspdk_bdev_iscsi.so.6.0 00:02:30.497 SYMLINK libspdk_bdev_delay.so 00:02:30.497 LIB libspdk_bdev_lvol.a 00:02:30.497 SYMLINK libspdk_bdev_aio.so 00:02:30.497 SYMLINK libspdk_bdev_zone_block.so 00:02:30.497 SYMLINK libspdk_bdev_malloc.so 00:02:30.497 LIB libspdk_bdev_virtio.a 00:02:30.497 SO libspdk_bdev_lvol.so.6.0 00:02:30.497 SYMLINK libspdk_bdev_iscsi.so 00:02:30.756 SO libspdk_bdev_virtio.so.6.0 00:02:30.756 SYMLINK libspdk_bdev_lvol.so 00:02:30.756 SYMLINK libspdk_bdev_virtio.so 00:02:31.014 LIB libspdk_bdev_raid.a 00:02:31.014 SO libspdk_bdev_raid.so.6.0 00:02:31.014 SYMLINK libspdk_bdev_raid.so 00:02:31.950 LIB 
libspdk_bdev_nvme.a 00:02:31.950 SO libspdk_bdev_nvme.so.7.1 00:02:32.209 SYMLINK libspdk_bdev_nvme.so 00:02:32.778 CC module/event/subsystems/vmd/vmd.o 00:02:32.778 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.778 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.778 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.778 CC module/event/subsystems/sock/sock.o 00:02:32.778 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:32.778 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.778 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.778 CC module/event/subsystems/keyring/keyring.o 00:02:32.778 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.778 LIB libspdk_event_vfu_tgt.a 00:02:32.778 LIB libspdk_event_keyring.a 00:02:32.778 LIB libspdk_event_sock.a 00:02:32.778 LIB libspdk_event_vmd.a 00:02:33.037 LIB libspdk_event_fsdev.a 00:02:33.037 LIB libspdk_event_vhost_blk.a 00:02:33.038 SO libspdk_event_vfu_tgt.so.3.0 00:02:33.038 LIB libspdk_event_scheduler.a 00:02:33.038 LIB libspdk_event_iobuf.a 00:02:33.038 SO libspdk_event_keyring.so.1.0 00:02:33.038 SO libspdk_event_vmd.so.6.0 00:02:33.038 SO libspdk_event_sock.so.5.0 00:02:33.038 SO libspdk_event_vhost_blk.so.3.0 00:02:33.038 SO libspdk_event_fsdev.so.1.0 00:02:33.038 SO libspdk_event_scheduler.so.4.0 00:02:33.038 SO libspdk_event_iobuf.so.3.0 00:02:33.038 SYMLINK libspdk_event_vfu_tgt.so 00:02:33.038 SYMLINK libspdk_event_keyring.so 00:02:33.038 SYMLINK libspdk_event_sock.so 00:02:33.038 SYMLINK libspdk_event_vmd.so 00:02:33.038 SYMLINK libspdk_event_vhost_blk.so 00:02:33.038 SYMLINK libspdk_event_fsdev.so 00:02:33.038 SYMLINK libspdk_event_scheduler.so 00:02:33.038 SYMLINK libspdk_event_iobuf.so 00:02:33.297 CC module/event/subsystems/accel/accel.o 00:02:33.558 LIB libspdk_event_accel.a 00:02:33.558 SO libspdk_event_accel.so.6.0 00:02:33.558 SYMLINK libspdk_event_accel.so 00:02:33.819 CC module/event/subsystems/bdev/bdev.o 00:02:34.078 LIB libspdk_event_bdev.a 00:02:34.078 SO 
libspdk_event_bdev.so.6.0 00:02:34.078 SYMLINK libspdk_event_bdev.so 00:02:34.338 CC module/event/subsystems/scsi/scsi.o 00:02:34.338 CC module/event/subsystems/nbd/nbd.o 00:02:34.338 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.338 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.338 CC module/event/subsystems/ublk/ublk.o 00:02:34.598 LIB libspdk_event_scsi.a 00:02:34.598 LIB libspdk_event_nbd.a 00:02:34.598 LIB libspdk_event_ublk.a 00:02:34.598 SO libspdk_event_nbd.so.6.0 00:02:34.598 SO libspdk_event_scsi.so.6.0 00:02:34.598 SO libspdk_event_ublk.so.3.0 00:02:34.598 LIB libspdk_event_nvmf.a 00:02:34.598 SYMLINK libspdk_event_nbd.so 00:02:34.598 SYMLINK libspdk_event_scsi.so 00:02:34.598 SYMLINK libspdk_event_ublk.so 00:02:34.598 SO libspdk_event_nvmf.so.6.0 00:02:34.858 SYMLINK libspdk_event_nvmf.so 00:02:35.117 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:35.117 CC module/event/subsystems/iscsi/iscsi.o 00:02:35.117 LIB libspdk_event_vhost_scsi.a 00:02:35.117 LIB libspdk_event_iscsi.a 00:02:35.117 SO libspdk_event_vhost_scsi.so.3.0 00:02:35.117 SO libspdk_event_iscsi.so.6.0 00:02:35.117 SYMLINK libspdk_event_vhost_scsi.so 00:02:35.376 SYMLINK libspdk_event_iscsi.so 00:02:35.376 SO libspdk.so.6.0 00:02:35.376 SYMLINK libspdk.so 00:02:35.636 CXX app/trace/trace.o 00:02:35.912 CC test/rpc_client/rpc_client_test.o 00:02:35.912 CC app/trace_record/trace_record.o 00:02:35.912 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.912 TEST_HEADER include/spdk/accel.h 00:02:35.912 CC app/spdk_nvme_perf/perf.o 00:02:35.912 TEST_HEADER include/spdk/assert.h 00:02:35.912 TEST_HEADER include/spdk/accel_module.h 00:02:35.912 CC app/spdk_top/spdk_top.o 00:02:35.912 TEST_HEADER include/spdk/barrier.h 00:02:35.912 TEST_HEADER include/spdk/base64.h 00:02:35.912 TEST_HEADER include/spdk/bdev.h 00:02:35.912 TEST_HEADER include/spdk/bdev_module.h 00:02:35.912 TEST_HEADER include/spdk/bit_array.h 00:02:35.912 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.912 
TEST_HEADER include/spdk/bit_pool.h 00:02:35.912 CC app/spdk_nvme_identify/identify.o 00:02:35.912 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.912 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.912 TEST_HEADER include/spdk/blobfs.h 00:02:35.912 TEST_HEADER include/spdk/blob.h 00:02:35.912 CC app/spdk_lspci/spdk_lspci.o 00:02:35.912 TEST_HEADER include/spdk/conf.h 00:02:35.912 TEST_HEADER include/spdk/config.h 00:02:35.912 TEST_HEADER include/spdk/cpuset.h 00:02:35.912 TEST_HEADER include/spdk/crc16.h 00:02:35.912 TEST_HEADER include/spdk/crc32.h 00:02:35.912 TEST_HEADER include/spdk/crc64.h 00:02:35.912 TEST_HEADER include/spdk/dma.h 00:02:35.912 TEST_HEADER include/spdk/dif.h 00:02:35.912 TEST_HEADER include/spdk/endian.h 00:02:35.912 TEST_HEADER include/spdk/env.h 00:02:35.912 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.912 TEST_HEADER include/spdk/event.h 00:02:35.912 TEST_HEADER include/spdk/fd_group.h 00:02:35.912 TEST_HEADER include/spdk/fd.h 00:02:35.912 TEST_HEADER include/spdk/file.h 00:02:35.912 TEST_HEADER include/spdk/fsdev.h 00:02:35.912 TEST_HEADER include/spdk/ftl.h 00:02:35.912 TEST_HEADER include/spdk/fsdev_module.h 00:02:35.912 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:35.912 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.912 TEST_HEADER include/spdk/hexlify.h 00:02:35.912 TEST_HEADER include/spdk/histogram_data.h 00:02:35.912 TEST_HEADER include/spdk/init.h 00:02:35.912 TEST_HEADER include/spdk/idxd.h 00:02:35.912 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.912 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.912 TEST_HEADER include/spdk/ioat.h 00:02:35.912 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.912 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.912 TEST_HEADER include/spdk/json.h 00:02:35.912 TEST_HEADER include/spdk/keyring.h 00:02:35.912 TEST_HEADER include/spdk/keyring_module.h 00:02:35.912 TEST_HEADER include/spdk/likely.h 00:02:35.912 TEST_HEADER include/spdk/log.h 00:02:35.912 TEST_HEADER include/spdk/md5.h 00:02:35.912 CC 
app/iscsi_tgt/iscsi_tgt.o 00:02:35.912 TEST_HEADER include/spdk/memory.h 00:02:35.912 TEST_HEADER include/spdk/lvol.h 00:02:35.912 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.912 TEST_HEADER include/spdk/mmio.h 00:02:35.912 TEST_HEADER include/spdk/nbd.h 00:02:35.912 CC app/nvmf_tgt/nvmf_main.o 00:02:35.912 TEST_HEADER include/spdk/net.h 00:02:35.912 TEST_HEADER include/spdk/notify.h 00:02:35.912 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.912 TEST_HEADER include/spdk/nvme.h 00:02:35.912 CC app/spdk_dd/spdk_dd.o 00:02:35.912 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.912 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.912 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.912 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.912 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.912 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.912 TEST_HEADER include/spdk/nvmf.h 00:02:35.912 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.912 TEST_HEADER include/spdk/opal.h 00:02:35.912 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.912 TEST_HEADER include/spdk/pipe.h 00:02:35.912 TEST_HEADER include/spdk/opal_spec.h 00:02:35.912 TEST_HEADER include/spdk/pci_ids.h 00:02:35.912 TEST_HEADER include/spdk/queue.h 00:02:35.912 TEST_HEADER include/spdk/reduce.h 00:02:35.912 TEST_HEADER include/spdk/rpc.h 00:02:35.912 TEST_HEADER include/spdk/scsi.h 00:02:35.912 TEST_HEADER include/spdk/scheduler.h 00:02:35.912 TEST_HEADER include/spdk/sock.h 00:02:35.912 TEST_HEADER include/spdk/stdinc.h 00:02:35.912 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.912 TEST_HEADER include/spdk/thread.h 00:02:35.912 TEST_HEADER include/spdk/string.h 00:02:35.912 TEST_HEADER include/spdk/trace.h 00:02:35.912 TEST_HEADER include/spdk/trace_parser.h 00:02:35.912 TEST_HEADER include/spdk/tree.h 00:02:35.912 TEST_HEADER include/spdk/ublk.h 00:02:35.912 TEST_HEADER include/spdk/util.h 00:02:35.912 TEST_HEADER include/spdk/uuid.h 00:02:35.912 TEST_HEADER include/spdk/version.h 00:02:35.912 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:35.912 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.912 TEST_HEADER include/spdk/vhost.h 00:02:35.912 TEST_HEADER include/spdk/vmd.h 00:02:35.912 TEST_HEADER include/spdk/xor.h 00:02:35.912 TEST_HEADER include/spdk/zipf.h 00:02:35.912 CXX test/cpp_headers/accel.o 00:02:35.912 CXX test/cpp_headers/assert.o 00:02:35.912 CXX test/cpp_headers/accel_module.o 00:02:35.912 CXX test/cpp_headers/barrier.o 00:02:35.912 CC app/spdk_tgt/spdk_tgt.o 00:02:35.912 CXX test/cpp_headers/bdev.o 00:02:35.913 CXX test/cpp_headers/bdev_module.o 00:02:35.913 CXX test/cpp_headers/base64.o 00:02:35.913 CXX test/cpp_headers/bdev_zone.o 00:02:35.913 CXX test/cpp_headers/bit_array.o 00:02:35.913 CXX test/cpp_headers/bit_pool.o 00:02:35.913 CXX test/cpp_headers/conf.o 00:02:35.913 CXX test/cpp_headers/blob_bdev.o 00:02:35.913 CXX test/cpp_headers/blobfs_bdev.o 00:02:35.913 CXX test/cpp_headers/config.o 00:02:35.913 CXX test/cpp_headers/blob.o 00:02:35.913 CXX test/cpp_headers/blobfs.o 00:02:35.913 CXX test/cpp_headers/cpuset.o 00:02:35.913 CXX test/cpp_headers/crc64.o 00:02:35.913 CXX test/cpp_headers/crc32.o 00:02:35.913 CXX test/cpp_headers/crc16.o 00:02:35.913 CXX test/cpp_headers/endian.o 00:02:35.913 CXX test/cpp_headers/dma.o 00:02:35.913 CXX test/cpp_headers/dif.o 00:02:35.913 CXX test/cpp_headers/env_dpdk.o 00:02:35.913 CXX test/cpp_headers/fd_group.o 00:02:35.913 CXX test/cpp_headers/env.o 00:02:35.913 CXX test/cpp_headers/event.o 00:02:35.913 CXX test/cpp_headers/file.o 00:02:35.913 CXX test/cpp_headers/fd.o 00:02:35.913 CXX test/cpp_headers/fsdev.o 00:02:35.913 CXX test/cpp_headers/ftl.o 00:02:35.913 CXX test/cpp_headers/fsdev_module.o 00:02:35.913 CXX test/cpp_headers/fuse_dispatcher.o 00:02:35.913 CXX test/cpp_headers/gpt_spec.o 00:02:35.913 CXX test/cpp_headers/histogram_data.o 00:02:35.913 CXX test/cpp_headers/hexlify.o 00:02:35.913 CXX test/cpp_headers/idxd.o 00:02:35.913 CXX test/cpp_headers/init.o 00:02:35.913 CXX 
test/cpp_headers/idxd_spec.o 00:02:35.913 CXX test/cpp_headers/ioat.o 00:02:35.913 CXX test/cpp_headers/iscsi_spec.o 00:02:35.913 CXX test/cpp_headers/json.o 00:02:35.913 CXX test/cpp_headers/ioat_spec.o 00:02:35.913 CXX test/cpp_headers/jsonrpc.o 00:02:35.913 CXX test/cpp_headers/keyring.o 00:02:35.913 CXX test/cpp_headers/keyring_module.o 00:02:35.913 CXX test/cpp_headers/likely.o 00:02:35.913 CXX test/cpp_headers/log.o 00:02:35.913 CXX test/cpp_headers/lvol.o 00:02:35.913 CXX test/cpp_headers/md5.o 00:02:35.913 CXX test/cpp_headers/memory.o 00:02:35.913 CXX test/cpp_headers/mmio.o 00:02:35.913 CXX test/cpp_headers/nbd.o 00:02:35.913 CXX test/cpp_headers/net.o 00:02:35.913 CXX test/cpp_headers/notify.o 00:02:35.913 CXX test/cpp_headers/nvme.o 00:02:35.913 CXX test/cpp_headers/nvme_intel.o 00:02:35.913 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.913 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.913 CXX test/cpp_headers/nvme_spec.o 00:02:35.913 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.913 CXX test/cpp_headers/nvme_zns.o 00:02:35.913 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.913 CXX test/cpp_headers/nvmf.o 00:02:35.913 CXX test/cpp_headers/nvmf_spec.o 00:02:35.913 CXX test/cpp_headers/nvmf_transport.o 00:02:35.913 CXX test/cpp_headers/opal.o 00:02:35.913 CC examples/ioat/verify/verify.o 00:02:35.913 CC examples/ioat/perf/perf.o 00:02:35.913 CC test/app/jsoncat/jsoncat.o 00:02:35.913 CC test/app/histogram_perf/histogram_perf.o 00:02:35.913 CC test/env/pci/pci_ut.o 00:02:35.913 CC test/thread/poller_perf/poller_perf.o 00:02:35.913 CC test/env/memory/memory_ut.o 00:02:35.913 CC test/app/stub/stub.o 00:02:36.186 CC test/env/vtophys/vtophys.o 00:02:36.186 CC app/fio/nvme/fio_plugin.o 00:02:36.186 CC examples/util/zipf/zipf.o 00:02:36.186 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:36.186 CC test/app/bdev_svc/bdev_svc.o 00:02:36.186 CC app/fio/bdev/fio_plugin.o 00:02:36.186 CC test/dma/test_dma/test_dma.o 00:02:36.186 LINK spdk_lspci 00:02:36.187 LINK 
spdk_nvme_discover 00:02:36.453 LINK interrupt_tgt 00:02:36.453 LINK nvmf_tgt 00:02:36.453 LINK rpc_client_test 00:02:36.453 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:36.453 CC test/env/mem_callbacks/mem_callbacks.o 00:02:36.453 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.453 LINK spdk_trace_record 00:02:36.453 LINK histogram_perf 00:02:36.453 LINK poller_perf 00:02:36.453 CXX test/cpp_headers/opal_spec.o 00:02:36.453 LINK jsoncat 00:02:36.453 CXX test/cpp_headers/pci_ids.o 00:02:36.453 CXX test/cpp_headers/pipe.o 00:02:36.453 LINK iscsi_tgt 00:02:36.453 CXX test/cpp_headers/queue.o 00:02:36.453 LINK spdk_tgt 00:02:36.453 CXX test/cpp_headers/reduce.o 00:02:36.453 CXX test/cpp_headers/rpc.o 00:02:36.454 CXX test/cpp_headers/scheduler.o 00:02:36.454 CXX test/cpp_headers/scsi.o 00:02:36.712 CXX test/cpp_headers/scsi_spec.o 00:02:36.712 CXX test/cpp_headers/sock.o 00:02:36.712 CXX test/cpp_headers/stdinc.o 00:02:36.712 LINK ioat_perf 00:02:36.712 CXX test/cpp_headers/string.o 00:02:36.712 CXX test/cpp_headers/thread.o 00:02:36.712 CXX test/cpp_headers/trace.o 00:02:36.712 CXX test/cpp_headers/trace_parser.o 00:02:36.712 CXX test/cpp_headers/tree.o 00:02:36.712 CXX test/cpp_headers/ublk.o 00:02:36.712 CXX test/cpp_headers/uuid.o 00:02:36.712 CXX test/cpp_headers/util.o 00:02:36.712 CXX test/cpp_headers/version.o 00:02:36.712 CXX test/cpp_headers/vfio_user_pci.o 00:02:36.712 CXX test/cpp_headers/vfio_user_spec.o 00:02:36.712 CXX test/cpp_headers/vhost.o 00:02:36.712 CXX test/cpp_headers/vmd.o 00:02:36.712 CXX test/cpp_headers/xor.o 00:02:36.712 CXX test/cpp_headers/zipf.o 00:02:36.712 LINK verify 00:02:36.712 LINK vtophys 00:02:36.712 LINK zipf 00:02:36.712 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.712 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.712 LINK env_dpdk_post_init 00:02:36.712 LINK spdk_trace 00:02:36.712 LINK stub 00:02:36.712 LINK bdev_svc 00:02:36.712 LINK pci_ut 00:02:36.970 LINK spdk_dd 00:02:36.970 LINK nvme_fuzz 00:02:36.970 
LINK spdk_bdev 00:02:36.970 CC app/vhost/vhost.o 00:02:36.970 CC test/event/event_perf/event_perf.o 00:02:36.970 CC test/event/reactor/reactor.o 00:02:36.970 CC test/event/reactor_perf/reactor_perf.o 00:02:36.970 CC test/event/app_repeat/app_repeat.o 00:02:36.970 LINK spdk_nvme_identify 00:02:36.970 CC test/event/scheduler/scheduler.o 00:02:37.228 CC examples/idxd/perf/perf.o 00:02:37.228 LINK spdk_nvme 00:02:37.228 CC examples/sock/hello_world/hello_sock.o 00:02:37.228 CC examples/vmd/led/led.o 00:02:37.228 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.228 LINK test_dma 00:02:37.228 CC examples/thread/thread/thread_ex.o 00:02:37.228 LINK vhost_fuzz 00:02:37.228 LINK event_perf 00:02:37.228 LINK spdk_nvme_perf 00:02:37.228 LINK reactor 00:02:37.228 LINK reactor_perf 00:02:37.228 LINK spdk_top 00:02:37.228 LINK app_repeat 00:02:37.228 LINK mem_callbacks 00:02:37.228 LINK vhost 00:02:37.228 LINK lsvmd 00:02:37.228 LINK led 00:02:37.228 LINK scheduler 00:02:37.228 LINK hello_sock 00:02:37.486 LINK idxd_perf 00:02:37.486 LINK thread 00:02:37.486 LINK memory_ut 00:02:37.745 CC test/nvme/startup/startup.o 00:02:37.745 CC test/nvme/aer/aer.o 00:02:37.745 CC test/nvme/err_injection/err_injection.o 00:02:37.745 CC test/nvme/connect_stress/connect_stress.o 00:02:37.745 CC test/nvme/reset/reset.o 00:02:37.745 CC test/nvme/sgl/sgl.o 00:02:37.745 CC test/nvme/boot_partition/boot_partition.o 00:02:37.745 CC test/nvme/overhead/overhead.o 00:02:37.745 CC test/nvme/cuse/cuse.o 00:02:37.745 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.745 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.745 CC test/nvme/simple_copy/simple_copy.o 00:02:37.745 CC test/nvme/compliance/nvme_compliance.o 00:02:37.745 CC test/nvme/reserve/reserve.o 00:02:37.745 CC test/nvme/e2edp/nvme_dp.o 00:02:37.745 CC test/nvme/fdp/fdp.o 00:02:37.745 CC test/accel/dif/dif.o 00:02:37.745 CC test/blobfs/mkfs/mkfs.o 00:02:37.745 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.745 CC 
examples/nvme/reconnect/reconnect.o 00:02:37.745 CC examples/nvme/arbitration/arbitration.o 00:02:37.745 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.745 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.745 CC examples/nvme/hello_world/hello_world.o 00:02:37.745 CC examples/nvme/hotplug/hotplug.o 00:02:37.745 CC examples/nvme/abort/abort.o 00:02:37.745 CC test/lvol/esnap/esnap.o 00:02:37.745 LINK startup 00:02:37.745 LINK connect_stress 00:02:38.004 LINK boot_partition 00:02:38.004 LINK doorbell_aers 00:02:38.004 LINK err_injection 00:02:38.004 LINK fused_ordering 00:02:38.004 CC examples/accel/perf/accel_perf.o 00:02:38.004 LINK reserve 00:02:38.004 LINK simple_copy 00:02:38.004 LINK mkfs 00:02:38.004 LINK reset 00:02:38.004 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:38.004 LINK aer 00:02:38.004 LINK sgl 00:02:38.004 LINK nvme_dp 00:02:38.004 CC examples/blob/cli/blobcli.o 00:02:38.004 CC examples/blob/hello_world/hello_blob.o 00:02:38.004 LINK overhead 00:02:38.004 LINK pmr_persistence 00:02:38.004 LINK fdp 00:02:38.004 LINK nvme_compliance 00:02:38.004 LINK cmb_copy 00:02:38.004 LINK hotplug 00:02:38.004 LINK hello_world 00:02:38.004 LINK iscsi_fuzz 00:02:38.004 LINK reconnect 00:02:38.263 LINK arbitration 00:02:38.263 LINK abort 00:02:38.263 LINK hello_blob 00:02:38.263 LINK nvme_manage 00:02:38.263 LINK hello_fsdev 00:02:38.263 LINK dif 00:02:38.263 LINK accel_perf 00:02:38.522 LINK blobcli 00:02:38.781 LINK cuse 00:02:38.781 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.781 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.781 CC test/bdev/bdevio/bdevio.o 00:02:39.044 LINK hello_bdev 00:02:39.044 LINK bdevio 00:02:39.303 LINK bdevperf 00:02:39.870 CC examples/nvmf/nvmf/nvmf.o 00:02:40.129 LINK nvmf 00:02:41.508 LINK esnap 00:02:41.508 00:02:41.508 real 0m55.325s 00:02:41.508 user 8m16.578s 00:02:41.508 sys 3m46.884s 00:02:41.508 09:41:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.508 09:41:15 make -- 
common/autotest_common.sh@10 -- $ set +x 00:02:41.508 ************************************ 00:02:41.508 END TEST make 00:02:41.508 ************************************ 00:02:41.508 09:41:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:41.508 09:41:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:41.508 09:41:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:41.508 09:41:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.508 09:41:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:41.508 09:41:15 -- pm/common@44 -- $ pid=2377994 00:02:41.508 09:41:15 -- pm/common@50 -- $ kill -TERM 2377994 00:02:41.508 09:41:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.508 09:41:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:41.508 09:41:15 -- pm/common@44 -- $ pid=2377995 00:02:41.508 09:41:15 -- pm/common@50 -- $ kill -TERM 2377995 00:02:41.508 09:41:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.508 09:41:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:41.508 09:41:15 -- pm/common@44 -- $ pid=2377997 00:02:41.508 09:41:15 -- pm/common@50 -- $ kill -TERM 2377997 00:02:41.508 09:41:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.508 09:41:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:41.508 09:41:15 -- pm/common@44 -- $ pid=2378022 00:02:41.508 09:41:15 -- pm/common@50 -- $ sudo -E kill -TERM 2378022 00:02:41.508 09:41:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:41.768 09:41:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:41.768 09:41:15 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:41.768 09:41:15 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:41.768 09:41:15 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:41.768 09:41:15 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:41.768 09:41:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:41.768 09:41:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:41.768 09:41:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:41.768 09:41:15 -- scripts/common.sh@336 -- # IFS=.-: 00:02:41.768 09:41:15 -- scripts/common.sh@336 -- # read -ra ver1 00:02:41.768 09:41:15 -- scripts/common.sh@337 -- # IFS=.-: 00:02:41.768 09:41:15 -- scripts/common.sh@337 -- # read -ra ver2 00:02:41.768 09:41:15 -- scripts/common.sh@338 -- # local 'op=<' 00:02:41.768 09:41:15 -- scripts/common.sh@340 -- # ver1_l=2 00:02:41.768 09:41:15 -- scripts/common.sh@341 -- # ver2_l=1 00:02:41.768 09:41:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:41.768 09:41:15 -- scripts/common.sh@344 -- # case "$op" in 00:02:41.768 09:41:15 -- scripts/common.sh@345 -- # : 1 00:02:41.768 09:41:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:41.768 09:41:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.768 09:41:15 -- scripts/common.sh@365 -- # decimal 1 00:02:41.768 09:41:15 -- scripts/common.sh@353 -- # local d=1 00:02:41.768 09:41:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:41.768 09:41:15 -- scripts/common.sh@355 -- # echo 1 00:02:41.768 09:41:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:41.768 09:41:15 -- scripts/common.sh@366 -- # decimal 2 00:02:41.768 09:41:15 -- scripts/common.sh@353 -- # local d=2 00:02:41.768 09:41:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:41.768 09:41:15 -- scripts/common.sh@355 -- # echo 2 00:02:41.768 09:41:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:41.768 09:41:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:41.768 09:41:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:41.768 09:41:15 -- scripts/common.sh@368 -- # return 0 00:02:41.768 09:41:15 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:41.768 09:41:15 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.768 --rc genhtml_branch_coverage=1 00:02:41.768 --rc genhtml_function_coverage=1 00:02:41.768 --rc genhtml_legend=1 00:02:41.768 --rc geninfo_all_blocks=1 00:02:41.768 --rc geninfo_unexecuted_blocks=1 00:02:41.768 00:02:41.768 ' 00:02:41.768 09:41:15 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.768 --rc genhtml_branch_coverage=1 00:02:41.768 --rc genhtml_function_coverage=1 00:02:41.768 --rc genhtml_legend=1 00:02:41.768 --rc geninfo_all_blocks=1 00:02:41.768 --rc geninfo_unexecuted_blocks=1 00:02:41.768 00:02:41.768 ' 00:02:41.768 09:41:15 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.768 --rc genhtml_branch_coverage=1 00:02:41.768 --rc 
genhtml_function_coverage=1 00:02:41.768 --rc genhtml_legend=1 00:02:41.768 --rc geninfo_all_blocks=1 00:02:41.768 --rc geninfo_unexecuted_blocks=1 00:02:41.768 00:02:41.768 ' 00:02:41.768 09:41:15 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.768 --rc genhtml_branch_coverage=1 00:02:41.768 --rc genhtml_function_coverage=1 00:02:41.768 --rc genhtml_legend=1 00:02:41.768 --rc geninfo_all_blocks=1 00:02:41.768 --rc geninfo_unexecuted_blocks=1 00:02:41.768 00:02:41.768 ' 00:02:41.768 09:41:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.768 09:41:15 -- nvmf/common.sh@7 -- # uname -s 00:02:41.768 09:41:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.768 09:41:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.768 09:41:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.768 09:41:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.768 09:41:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.768 09:41:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.768 09:41:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.768 09:41:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.768 09:41:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.768 09:41:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.768 09:41:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.768 09:41:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.768 09:41:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.768 09:41:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.768 09:41:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.768 09:41:15 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.768 09:41:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:41.768 09:41:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:41.768 09:41:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.768 09:41:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.768 09:41:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.768 09:41:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.768 09:41:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.768 09:41:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.768 09:41:15 -- paths/export.sh@5 -- # export PATH 00:02:41.768 09:41:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.768 09:41:15 -- nvmf/common.sh@51 -- # : 0 00:02:41.768 09:41:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:41.768 09:41:15 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:41.768 09:41:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.768 09:41:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.768 09:41:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.768 09:41:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:41.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:41.769 09:41:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:41.769 09:41:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:41.769 09:41:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:41.769 09:41:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.769 09:41:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.769 09:41:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.769 09:41:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.769 09:41:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.769 09:41:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.769 09:41:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.769 09:41:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.769 09:41:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.769 09:41:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.769 09:41:15 -- spdk/autotest.sh@48 -- # udevadm_pid=2440224 00:02:41.769 09:41:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.769 09:41:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.769 09:41:15 -- pm/common@17 -- # local monitor 00:02:41.769 09:41:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.769 09:41:15 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:41.769 09:41:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.769 09:41:15 -- pm/common@21 -- # date +%s 00:02:41.769 09:41:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.769 09:41:15 -- pm/common@21 -- # date +%s 00:02:41.769 09:41:15 -- pm/common@25 -- # sleep 1 00:02:41.769 09:41:15 -- pm/common@21 -- # date +%s 00:02:41.769 09:41:15 -- pm/common@21 -- # date +%s 00:02:41.769 09:41:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732092075 00:02:41.769 09:41:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732092075 00:02:41.769 09:41:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732092075 00:02:41.769 09:41:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732092075 00:02:42.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732092075_collect-vmstat.pm.log 00:02:42.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732092075_collect-cpu-load.pm.log 00:02:42.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732092075_collect-cpu-temp.pm.log 00:02:42.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732092075_collect-bmc-pm.bmc.pm.log 00:02:42.964 
09:41:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.964 09:41:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.964 09:41:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.964 09:41:16 -- common/autotest_common.sh@10 -- # set +x 00:02:42.964 09:41:16 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.964 09:41:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:42.964 09:41:16 -- common/autotest_common.sh@10 -- # set +x 00:02:42.964 09:41:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.964 09:41:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.964 09:41:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.964 09:41:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.964 09:41:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.964 09:41:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.964 09:41:16 -- common/autotest_common.sh@1457 -- # uname 00:02:42.964 09:41:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:42.964 09:41:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.964 09:41:16 -- common/autotest_common.sh@1477 -- # uname 00:02:42.964 09:41:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:42.964 09:41:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:42.964 09:41:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:42.964 lcov: LCOV version 1.15 00:02:42.964 09:41:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:55.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.177 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:07.385 09:41:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:07.385 09:41:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:07.385 09:41:40 -- common/autotest_common.sh@10 -- # set +x 00:03:07.385 09:41:40 -- spdk/autotest.sh@78 -- # rm -f 00:03:07.385 09:41:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.674 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:10.674 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.674 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.674 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.674 09:41:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:10.674 09:41:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:10.674 09:41:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:10.674 09:41:44 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:10.674 09:41:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:10.674 09:41:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:10.674 09:41:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:10.675 09:41:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.675 09:41:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:10.675 09:41:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:10.675 09:41:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.675 09:41:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:10.675 09:41:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:10.675 09:41:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:10.675 09:41:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.675 No valid GPT data, bailing 00:03:10.675 09:41:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.675 09:41:44 -- scripts/common.sh@394 -- # pt= 00:03:10.675 09:41:44 -- scripts/common.sh@395 -- # return 1 00:03:10.675 09:41:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.675 1+0 records in 00:03:10.675 1+0 records out 00:03:10.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00209474 s, 501 MB/s 00:03:10.675 09:41:44 -- spdk/autotest.sh@105 -- # sync 00:03:10.675 09:41:44 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.675 09:41:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.675 09:41:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.244 09:41:49 -- spdk/autotest.sh@111 -- # uname -s 00:03:17.244 09:41:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:17.244 09:41:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:17.244 09:41:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.149 Hugepages 00:03:19.149 node hugesize free / total 00:03:19.149 node0 1048576kB 0 / 0 00:03:19.149 node0 2048kB 0 / 0 00:03:19.149 node1 1048576kB 0 / 0 00:03:19.149 node1 2048kB 0 / 0 00:03:19.149 00:03:19.150 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.150 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:19.150 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:19.150 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:19.150 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:19.150 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:19.150 09:41:52 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.150 09:41:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.150 09:41:52 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:19.150 09:41:52 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.439 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.439 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.376 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.635 09:41:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:24.572 09:41:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:24.572 09:41:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:24.572 09:41:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.572 09:41:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:24.572 09:41:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:24.572 09:41:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:24.572 09:41:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.572 09:41:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.572 09:41:58 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:24.830 09:41:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:24.830 09:41:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:24.830 09:41:58 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.520 Waiting for block devices as requested 00:03:27.520 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:27.520 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:27.779 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:27.779 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:27.779 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.040 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.040 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.040 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:28.300 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:28.300 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:28.300 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:28.300 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:28.559 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.559 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.559 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.818 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:28.818 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:28.818 09:42:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:28.818 09:42:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:28.818 09:42:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:28.818 09:42:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:28.818 09:42:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:28.818 09:42:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:28.818 09:42:02 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:28.818 09:42:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:28.818 09:42:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:28.818 09:42:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:28.818 09:42:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:28.818 09:42:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:28.818 09:42:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:28.818 09:42:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:28.818 09:42:02 -- common/autotest_common.sh@1543 -- # continue 00:03:28.818 09:42:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:28.818 09:42:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:28.818 09:42:02 -- common/autotest_common.sh@10 -- # set +x 00:03:29.077 09:42:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:29.077 09:42:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.077 09:42:02 -- common/autotest_common.sh@10 -- # set +x 00:03:29.077 09:42:02 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.364 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:32.364 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.364 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.301 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.560 09:42:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:33.560 09:42:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.560 09:42:06 -- common/autotest_common.sh@10 -- # set +x 00:03:33.560 09:42:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:33.560 09:42:06 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:33.560 09:42:06 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:33.560 09:42:06 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:33.560 09:42:06 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:33.560 09:42:06 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:33.560 09:42:06 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:33.560 09:42:06 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:33.560 09:42:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:33.560 09:42:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:33.560 09:42:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:33.560 09:42:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.560 09:42:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.560 09:42:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:33.560 09:42:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:33.560 09:42:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:33.560 09:42:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:33.560 09:42:07 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:33.560 09:42:07 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:33.560 09:42:07 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:33.560 09:42:07 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:33.560 09:42:07 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:33.560 09:42:07 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:33.560 09:42:07 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2454971 00:03:33.561 09:42:07 -- common/autotest_common.sh@1585 -- # waitforlisten 2454971 00:03:33.561 09:42:07 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.561 09:42:07 -- common/autotest_common.sh@835 -- # '[' -z 2454971 ']' 00:03:33.561 09:42:07 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.561 09:42:07 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.561 09:42:07 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:33.561 09:42:07 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.561 09:42:07 -- common/autotest_common.sh@10 -- # set +x 00:03:33.561 [2024-11-20 09:42:07.101281] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:03:33.561 [2024-11-20 09:42:07.101327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454971 ] 00:03:33.819 [2024-11-20 09:42:07.176280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.819 [2024-11-20 09:42:07.218181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.077 09:42:07 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.077 09:42:07 -- common/autotest_common.sh@868 -- # return 0 00:03:34.077 09:42:07 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:34.077 09:42:07 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:34.077 09:42:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:37.359 nvme0n1 00:03:37.359 09:42:10 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:37.359 [2024-11-20 09:42:10.613126] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:37.359 request: 00:03:37.359 { 00:03:37.359 "nvme_ctrlr_name": "nvme0", 00:03:37.359 "password": "test", 00:03:37.359 "method": "bdev_nvme_opal_revert", 00:03:37.359 "req_id": 1 00:03:37.359 } 00:03:37.359 Got JSON-RPC error response 00:03:37.359 response: 00:03:37.359 { 00:03:37.359 "code": -32602, 00:03:37.359 "message": "Invalid parameters" 00:03:37.359 } 00:03:37.359 09:42:10 -- common/autotest_common.sh@1591 -- # true 
00:03:37.359 09:42:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:37.359 09:42:10 -- common/autotest_common.sh@1595 -- # killprocess 2454971 00:03:37.359 09:42:10 -- common/autotest_common.sh@954 -- # '[' -z 2454971 ']' 00:03:37.359 09:42:10 -- common/autotest_common.sh@958 -- # kill -0 2454971 00:03:37.359 09:42:10 -- common/autotest_common.sh@959 -- # uname 00:03:37.359 09:42:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:37.359 09:42:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2454971 00:03:37.359 09:42:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:37.359 09:42:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:37.359 09:42:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2454971' 00:03:37.359 killing process with pid 2454971 00:03:37.359 09:42:10 -- common/autotest_common.sh@973 -- # kill 2454971 00:03:37.359 09:42:10 -- common/autotest_common.sh@978 -- # wait 2454971 00:03:39.888 09:42:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:39.888 09:42:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:39.888 09:42:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:39.888 09:42:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:39.888 09:42:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:39.888 09:42:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.888 09:42:12 -- common/autotest_common.sh@10 -- # set +x 00:03:39.888 09:42:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:39.888 09:42:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.888 09:42:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.888 09:42:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.888 09:42:12 -- common/autotest_common.sh@10 -- # set +x 00:03:39.888 ************************************ 00:03:39.888 START TEST env 00:03:39.888 
************************************ 00:03:39.888 09:42:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.888 * Looking for test storage... 00:03:39.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:39.888 09:42:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.888 09:42:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.888 09:42:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.888 09:42:13 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.888 09:42:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.888 09:42:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.888 09:42:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.888 09:42:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.888 09:42:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.888 09:42:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.888 09:42:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.888 09:42:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.888 09:42:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.888 09:42:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.888 09:42:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.888 09:42:13 env -- scripts/common.sh@344 -- # case "$op" in 00:03:39.889 09:42:13 env -- scripts/common.sh@345 -- # : 1 00:03:39.889 09:42:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.889 09:42:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.889 09:42:13 env -- scripts/common.sh@365 -- # decimal 1 00:03:39.889 09:42:13 env -- scripts/common.sh@353 -- # local d=1 00:03:39.889 09:42:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.889 09:42:13 env -- scripts/common.sh@355 -- # echo 1 00:03:39.889 09:42:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.889 09:42:13 env -- scripts/common.sh@366 -- # decimal 2 00:03:39.889 09:42:13 env -- scripts/common.sh@353 -- # local d=2 00:03:39.889 09:42:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.889 09:42:13 env -- scripts/common.sh@355 -- # echo 2 00:03:39.889 09:42:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.889 09:42:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.889 09:42:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.889 09:42:13 env -- scripts/common.sh@368 -- # return 0 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:39.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.889 --rc genhtml_branch_coverage=1 00:03:39.889 --rc genhtml_function_coverage=1 00:03:39.889 --rc genhtml_legend=1 00:03:39.889 --rc geninfo_all_blocks=1 00:03:39.889 --rc geninfo_unexecuted_blocks=1 00:03:39.889 00:03:39.889 ' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:39.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.889 --rc genhtml_branch_coverage=1 00:03:39.889 --rc genhtml_function_coverage=1 00:03:39.889 --rc genhtml_legend=1 00:03:39.889 --rc geninfo_all_blocks=1 00:03:39.889 --rc geninfo_unexecuted_blocks=1 00:03:39.889 00:03:39.889 ' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:39.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:39.889 --rc genhtml_branch_coverage=1 00:03:39.889 --rc genhtml_function_coverage=1 00:03:39.889 --rc genhtml_legend=1 00:03:39.889 --rc geninfo_all_blocks=1 00:03:39.889 --rc geninfo_unexecuted_blocks=1 00:03:39.889 00:03:39.889 ' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:39.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.889 --rc genhtml_branch_coverage=1 00:03:39.889 --rc genhtml_function_coverage=1 00:03:39.889 --rc genhtml_legend=1 00:03:39.889 --rc geninfo_all_blocks=1 00:03:39.889 --rc geninfo_unexecuted_blocks=1 00:03:39.889 00:03:39.889 ' 00:03:39.889 09:42:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.889 09:42:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.889 ************************************ 00:03:39.889 START TEST env_memory 00:03:39.889 ************************************ 00:03:39.889 09:42:13 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.889 00:03:39.889 00:03:39.889 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.889 http://cunit.sourceforge.net/ 00:03:39.889 00:03:39.889 00:03:39.889 Suite: memory 00:03:39.889 Test: alloc and free memory map ...[2024-11-20 09:42:13.132309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:39.889 passed 00:03:39.889 Test: mem map translation ...[2024-11-20 09:42:13.149780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:39.889 [2024-11-20 
09:42:13.149792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:39.889 [2024-11-20 09:42:13.149825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:39.889 [2024-11-20 09:42:13.149830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:39.889 passed 00:03:39.889 Test: mem map registration ...[2024-11-20 09:42:13.185370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:39.889 [2024-11-20 09:42:13.185381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:39.889 passed 00:03:39.889 Test: mem map adjacent registrations ...passed 00:03:39.889 00:03:39.889 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.889 suites 1 1 n/a 0 0 00:03:39.889 tests 4 4 4 0 0 00:03:39.889 asserts 152 152 152 0 n/a 00:03:39.889 00:03:39.889 Elapsed time = 0.133 seconds 00:03:39.889 00:03:39.889 real 0m0.146s 00:03:39.889 user 0m0.137s 00:03:39.889 sys 0m0.008s 00:03:39.889 09:42:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.889 09:42:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:39.889 ************************************ 00:03:39.889 END TEST env_memory 00:03:39.889 ************************************ 00:03:39.889 09:42:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:39.889 09:42:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.889 09:42:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.889 ************************************ 00:03:39.889 START TEST env_vtophys 00:03:39.889 ************************************ 00:03:39.889 09:42:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.889 EAL: lib.eal log level changed from notice to debug 00:03:39.889 EAL: Detected lcore 0 as core 0 on socket 0 00:03:39.889 EAL: Detected lcore 1 as core 1 on socket 0 00:03:39.889 EAL: Detected lcore 2 as core 2 on socket 0 00:03:39.889 EAL: Detected lcore 3 as core 3 on socket 0 00:03:39.889 EAL: Detected lcore 4 as core 4 on socket 0 00:03:39.889 EAL: Detected lcore 5 as core 5 on socket 0 00:03:39.889 EAL: Detected lcore 6 as core 6 on socket 0 00:03:39.889 EAL: Detected lcore 7 as core 8 on socket 0 00:03:39.889 EAL: Detected lcore 8 as core 9 on socket 0 00:03:39.889 EAL: Detected lcore 9 as core 10 on socket 0 00:03:39.889 EAL: Detected lcore 10 as core 11 on socket 0 00:03:39.889 EAL: Detected lcore 11 as core 12 on socket 0 00:03:39.889 EAL: Detected lcore 12 as core 13 on socket 0 00:03:39.889 EAL: Detected lcore 13 as core 16 on socket 0 00:03:39.889 EAL: Detected lcore 14 as core 17 on socket 0 00:03:39.889 EAL: Detected lcore 15 as core 18 on socket 0 00:03:39.889 EAL: Detected lcore 16 as core 19 on socket 0 00:03:39.889 EAL: Detected lcore 17 as core 20 on socket 0 00:03:39.889 EAL: Detected lcore 18 as core 21 on socket 0 00:03:39.889 EAL: Detected lcore 19 as core 25 on socket 0 00:03:39.889 EAL: Detected lcore 20 as core 26 on socket 0 00:03:39.889 EAL: Detected lcore 21 as core 27 on socket 0 00:03:39.889 EAL: Detected lcore 22 as core 28 on socket 0 00:03:39.889 EAL: Detected lcore 23 as core 29 on socket 0 00:03:39.889 EAL: Detected lcore 24 as core 0 on socket 1 00:03:39.889 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:39.889 EAL: Detected lcore 26 as core 2 on socket 1 00:03:39.889 EAL: Detected lcore 27 as core 3 on socket 1 00:03:39.889 EAL: Detected lcore 28 as core 4 on socket 1 00:03:39.889 EAL: Detected lcore 29 as core 5 on socket 1 00:03:39.889 EAL: Detected lcore 30 as core 6 on socket 1 00:03:39.889 EAL: Detected lcore 31 as core 8 on socket 1 00:03:39.889 EAL: Detected lcore 32 as core 10 on socket 1 00:03:39.889 EAL: Detected lcore 33 as core 11 on socket 1 00:03:39.889 EAL: Detected lcore 34 as core 12 on socket 1 00:03:39.889 EAL: Detected lcore 35 as core 13 on socket 1 00:03:39.889 EAL: Detected lcore 36 as core 16 on socket 1 00:03:39.889 EAL: Detected lcore 37 as core 17 on socket 1 00:03:39.889 EAL: Detected lcore 38 as core 18 on socket 1 00:03:39.889 EAL: Detected lcore 39 as core 19 on socket 1 00:03:39.889 EAL: Detected lcore 40 as core 20 on socket 1 00:03:39.889 EAL: Detected lcore 41 as core 21 on socket 1 00:03:39.889 EAL: Detected lcore 42 as core 24 on socket 1 00:03:39.889 EAL: Detected lcore 43 as core 25 on socket 1 00:03:39.889 EAL: Detected lcore 44 as core 26 on socket 1 00:03:39.889 EAL: Detected lcore 45 as core 27 on socket 1 00:03:39.889 EAL: Detected lcore 46 as core 28 on socket 1 00:03:39.889 EAL: Detected lcore 47 as core 29 on socket 1 00:03:39.889 EAL: Detected lcore 48 as core 0 on socket 0 00:03:39.889 EAL: Detected lcore 49 as core 1 on socket 0 00:03:39.889 EAL: Detected lcore 50 as core 2 on socket 0 00:03:39.889 EAL: Detected lcore 51 as core 3 on socket 0 00:03:39.889 EAL: Detected lcore 52 as core 4 on socket 0 00:03:39.889 EAL: Detected lcore 53 as core 5 on socket 0 00:03:39.890 EAL: Detected lcore 54 as core 6 on socket 0 00:03:39.890 EAL: Detected lcore 55 as core 8 on socket 0 00:03:39.890 EAL: Detected lcore 56 as core 9 on socket 0 00:03:39.890 EAL: Detected lcore 57 as core 10 on socket 0 00:03:39.890 EAL: Detected lcore 58 as core 11 on socket 0 00:03:39.890 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:39.890 EAL: Detected lcore 60 as core 13 on socket 0 00:03:39.890 EAL: Detected lcore 61 as core 16 on socket 0 00:03:39.890 EAL: Detected lcore 62 as core 17 on socket 0 00:03:39.890 EAL: Detected lcore 63 as core 18 on socket 0 00:03:39.890 EAL: Detected lcore 64 as core 19 on socket 0 00:03:39.890 EAL: Detected lcore 65 as core 20 on socket 0 00:03:39.890 EAL: Detected lcore 66 as core 21 on socket 0 00:03:39.890 EAL: Detected lcore 67 as core 25 on socket 0 00:03:39.890 EAL: Detected lcore 68 as core 26 on socket 0 00:03:39.890 EAL: Detected lcore 69 as core 27 on socket 0 00:03:39.890 EAL: Detected lcore 70 as core 28 on socket 0 00:03:39.890 EAL: Detected lcore 71 as core 29 on socket 0 00:03:39.890 EAL: Detected lcore 72 as core 0 on socket 1 00:03:39.890 EAL: Detected lcore 73 as core 1 on socket 1 00:03:39.890 EAL: Detected lcore 74 as core 2 on socket 1 00:03:39.890 EAL: Detected lcore 75 as core 3 on socket 1 00:03:39.890 EAL: Detected lcore 76 as core 4 on socket 1 00:03:39.890 EAL: Detected lcore 77 as core 5 on socket 1 00:03:39.890 EAL: Detected lcore 78 as core 6 on socket 1 00:03:39.890 EAL: Detected lcore 79 as core 8 on socket 1 00:03:39.890 EAL: Detected lcore 80 as core 10 on socket 1 00:03:39.890 EAL: Detected lcore 81 as core 11 on socket 1 00:03:39.890 EAL: Detected lcore 82 as core 12 on socket 1 00:03:39.890 EAL: Detected lcore 83 as core 13 on socket 1 00:03:39.890 EAL: Detected lcore 84 as core 16 on socket 1 00:03:39.890 EAL: Detected lcore 85 as core 17 on socket 1 00:03:39.890 EAL: Detected lcore 86 as core 18 on socket 1 00:03:39.890 EAL: Detected lcore 87 as core 19 on socket 1 00:03:39.890 EAL: Detected lcore 88 as core 20 on socket 1 00:03:39.890 EAL: Detected lcore 89 as core 21 on socket 1 00:03:39.890 EAL: Detected lcore 90 as core 24 on socket 1 00:03:39.890 EAL: Detected lcore 91 as core 25 on socket 1 00:03:39.890 EAL: Detected lcore 92 as core 26 on socket 1 00:03:39.890 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:39.890 EAL: Detected lcore 94 as core 28 on socket 1 00:03:39.890 EAL: Detected lcore 95 as core 29 on socket 1 00:03:39.890 EAL: Maximum logical cores by configuration: 128 00:03:39.890 EAL: Detected CPU lcores: 96 00:03:39.890 EAL: Detected NUMA nodes: 2 00:03:39.890 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:39.890 EAL: Detected shared linkage of DPDK 00:03:39.890 EAL: No shared files mode enabled, IPC will be disabled 00:03:39.890 EAL: Bus pci wants IOVA as 'DC' 00:03:39.890 EAL: Buses did not request a specific IOVA mode. 00:03:39.890 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:39.890 EAL: Selected IOVA mode 'VA' 00:03:39.890 EAL: Probing VFIO support... 00:03:39.890 EAL: IOMMU type 1 (Type 1) is supported 00:03:39.890 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:39.890 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:39.890 EAL: VFIO support initialized 00:03:39.890 EAL: Ask a virtual area of 0x2e000 bytes 00:03:39.890 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:39.890 EAL: Setting up physically contiguous memory... 
00:03:39.890 EAL: Setting maximum number of open files to 524288 00:03:39.890 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:39.890 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:39.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:39.890 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:39.890 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.890 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:39.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.890 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.890 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:39.890 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:39.890 EAL: Hugepages will be freed exactly as allocated. 
00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: TSC frequency is ~2100000 KHz 00:03:39.890 EAL: Main lcore 0 is ready (tid=7f07e0478a00;cpuset=[0]) 00:03:39.890 EAL: Trying to obtain current memory policy. 00:03:39.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.890 EAL: Restoring previous memory policy: 0 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was expanded by 2MB 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:39.890 EAL: Mem event callback 'spdk:(nil)' registered 00:03:39.890 00:03:39.890 00:03:39.890 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.890 http://cunit.sourceforge.net/ 00:03:39.890 00:03:39.890 00:03:39.890 Suite: components_suite 00:03:39.890 Test: vtophys_malloc_test ...passed 00:03:39.890 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:39.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.890 EAL: Restoring previous memory policy: 4 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was expanded by 4MB 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was shrunk by 4MB 00:03:39.890 EAL: Trying to obtain current memory policy. 
00:03:39.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.890 EAL: Restoring previous memory policy: 4 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was expanded by 6MB 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was shrunk by 6MB 00:03:39.890 EAL: Trying to obtain current memory policy. 00:03:39.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.890 EAL: Restoring previous memory policy: 4 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.890 EAL: request: mp_malloc_sync 00:03:39.890 EAL: No shared files mode enabled, IPC is disabled 00:03:39.890 EAL: Heap on socket 0 was expanded by 10MB 00:03:39.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was shrunk by 10MB 00:03:39.891 EAL: Trying to obtain current memory policy. 00:03:39.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.891 EAL: Restoring previous memory policy: 4 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was expanded by 18MB 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was shrunk by 18MB 00:03:39.891 EAL: Trying to obtain current memory policy. 
00:03:39.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.891 EAL: Restoring previous memory policy: 4 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was expanded by 34MB 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was shrunk by 34MB 00:03:39.891 EAL: Trying to obtain current memory policy. 00:03:39.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.891 EAL: Restoring previous memory policy: 4 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was expanded by 66MB 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was shrunk by 66MB 00:03:39.891 EAL: Trying to obtain current memory policy. 00:03:39.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.891 EAL: Restoring previous memory policy: 4 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.891 EAL: request: mp_malloc_sync 00:03:39.891 EAL: No shared files mode enabled, IPC is disabled 00:03:39.891 EAL: Heap on socket 0 was expanded by 130MB 00:03:39.891 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.149 EAL: request: mp_malloc_sync 00:03:40.149 EAL: No shared files mode enabled, IPC is disabled 00:03:40.149 EAL: Heap on socket 0 was shrunk by 130MB 00:03:40.149 EAL: Trying to obtain current memory policy. 
00:03:40.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.149 EAL: Restoring previous memory policy: 4 00:03:40.149 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.149 EAL: request: mp_malloc_sync 00:03:40.149 EAL: No shared files mode enabled, IPC is disabled 00:03:40.149 EAL: Heap on socket 0 was expanded by 258MB 00:03:40.149 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.149 EAL: request: mp_malloc_sync 00:03:40.149 EAL: No shared files mode enabled, IPC is disabled 00:03:40.149 EAL: Heap on socket 0 was shrunk by 258MB 00:03:40.149 EAL: Trying to obtain current memory policy. 00:03:40.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.149 EAL: Restoring previous memory policy: 4 00:03:40.149 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.149 EAL: request: mp_malloc_sync 00:03:40.149 EAL: No shared files mode enabled, IPC is disabled 00:03:40.149 EAL: Heap on socket 0 was expanded by 514MB 00:03:40.408 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.408 EAL: request: mp_malloc_sync 00:03:40.408 EAL: No shared files mode enabled, IPC is disabled 00:03:40.408 EAL: Heap on socket 0 was shrunk by 514MB 00:03:40.408 EAL: Trying to obtain current memory policy. 
00:03:40.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.667 EAL: Restoring previous memory policy: 4 00:03:40.667 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.667 EAL: request: mp_malloc_sync 00:03:40.667 EAL: No shared files mode enabled, IPC is disabled 00:03:40.667 EAL: Heap on socket 0 was expanded by 1026MB 00:03:40.667 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.925 EAL: request: mp_malloc_sync 00:03:40.925 EAL: No shared files mode enabled, IPC is disabled 00:03:40.925 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.925 passed 00:03:40.925 00:03:40.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.925 suites 1 1 n/a 0 0 00:03:40.925 tests 2 2 2 0 0 00:03:40.925 asserts 497 497 497 0 n/a 00:03:40.925 00:03:40.925 Elapsed time = 0.968 seconds 00:03:40.925 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.925 EAL: request: mp_malloc_sync 00:03:40.925 EAL: No shared files mode enabled, IPC is disabled 00:03:40.925 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.925 EAL: No shared files mode enabled, IPC is disabled 00:03:40.925 EAL: No shared files mode enabled, IPC is disabled 00:03:40.925 EAL: No shared files mode enabled, IPC is disabled 00:03:40.925 00:03:40.925 real 0m1.079s 00:03:40.925 user 0m0.634s 00:03:40.925 sys 0m0.416s 00:03:40.925 09:42:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.925 09:42:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.925 ************************************ 00:03:40.925 END TEST env_vtophys 00:03:40.925 ************************************ 00:03:40.925 09:42:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.925 09:42:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.925 09:42:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.925 09:42:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.925 
************************************ 00:03:40.925 START TEST env_pci 00:03:40.925 ************************************ 00:03:40.925 09:42:14 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.925 00:03:40.925 00:03:40.925 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.925 http://cunit.sourceforge.net/ 00:03:40.925 00:03:40.925 00:03:40.925 Suite: pci 00:03:40.925 Test: pci_hook ...[2024-11-20 09:42:14.463657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2456297 has claimed it 00:03:40.925 EAL: Cannot find device (10000:00:01.0) 00:03:40.925 EAL: Failed to attach device on primary process 00:03:40.925 passed 00:03:40.925 00:03:40.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.925 suites 1 1 n/a 0 0 00:03:40.925 tests 1 1 1 0 0 00:03:40.925 asserts 25 25 25 0 n/a 00:03:40.925 00:03:40.925 Elapsed time = 0.029 seconds 00:03:40.925 00:03:40.925 real 0m0.049s 00:03:40.925 user 0m0.017s 00:03:40.925 sys 0m0.031s 00:03:40.925 09:42:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.925 09:42:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:40.925 ************************************ 00:03:40.925 END TEST env_pci 00:03:40.925 ************************************ 00:03:41.184 09:42:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:41.184 09:42:14 env -- env/env.sh@15 -- # uname 00:03:41.184 09:42:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:41.184 09:42:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:41.184 09:42:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:41.184 09:42:14 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:41.184 09:42:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.184 09:42:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.184 ************************************ 00:03:41.184 START TEST env_dpdk_post_init 00:03:41.184 ************************************ 00:03:41.184 09:42:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:41.184 EAL: Detected CPU lcores: 96 00:03:41.184 EAL: Detected NUMA nodes: 2 00:03:41.184 EAL: Detected shared linkage of DPDK 00:03:41.184 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:41.184 EAL: Selected IOVA mode 'VA' 00:03:41.184 EAL: VFIO support initialized 00:03:41.184 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:41.184 EAL: Using IOMMU type 1 (Type 1) 00:03:41.184 EAL: Ignore mapping IO port bar(1) 00:03:41.184 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:41.184 EAL: Ignore mapping IO port bar(1) 00:03:41.184 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:41.184 EAL: Ignore mapping IO port bar(1) 00:03:41.184 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:41.184 EAL: Ignore mapping IO port bar(1) 00:03:41.185 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:41.443 EAL: Ignore mapping IO port bar(1) 00:03:41.443 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:41.443 EAL: Ignore mapping IO port bar(1) 00:03:41.443 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:41.443 EAL: Ignore mapping IO port bar(1) 00:03:41.443 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:41.443 EAL: Ignore mapping IO port bar(1) 00:03:41.443 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:42.011 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:42.011 EAL: Ignore mapping IO port bar(1) 00:03:42.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:42.011 EAL: Ignore mapping IO port bar(1) 00:03:42.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:42.011 EAL: Ignore mapping IO port bar(1) 00:03:42.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:42.269 EAL: Ignore mapping IO port bar(1) 00:03:42.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:42.270 EAL: Ignore mapping IO port bar(1) 00:03:42.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:42.270 EAL: Ignore mapping IO port bar(1) 00:03:42.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:42.270 EAL: Ignore mapping IO port bar(1) 00:03:42.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:42.270 EAL: Ignore mapping IO port bar(1) 00:03:42.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:46.452 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:46.452 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:46.452 Starting DPDK initialization... 00:03:46.452 Starting SPDK post initialization... 00:03:46.452 SPDK NVMe probe 00:03:46.452 Attaching to 0000:5e:00.0 00:03:46.452 Attached to 0000:5e:00.0 00:03:46.452 Cleaning up... 
00:03:46.452 00:03:46.452 real 0m4.949s 00:03:46.452 user 0m3.503s 00:03:46.452 sys 0m0.512s 00:03:46.452 09:42:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.452 09:42:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.452 ************************************ 00:03:46.452 END TEST env_dpdk_post_init 00:03:46.452 ************************************ 00:03:46.452 09:42:19 env -- env/env.sh@26 -- # uname 00:03:46.452 09:42:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:46.452 09:42:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.452 09:42:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.452 09:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.452 09:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.452 ************************************ 00:03:46.452 START TEST env_mem_callbacks 00:03:46.452 ************************************ 00:03:46.452 09:42:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.452 EAL: Detected CPU lcores: 96 00:03:46.452 EAL: Detected NUMA nodes: 2 00:03:46.452 EAL: Detected shared linkage of DPDK 00:03:46.452 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.452 EAL: Selected IOVA mode 'VA' 00:03:46.452 EAL: VFIO support initialized 00:03:46.452 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.452 00:03:46.452 00:03:46.452 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.452 http://cunit.sourceforge.net/ 00:03:46.452 00:03:46.452 00:03:46.452 Suite: memory 00:03:46.452 Test: test ... 
00:03:46.452 register 0x200000200000 2097152 00:03:46.452 malloc 3145728 00:03:46.452 register 0x200000400000 4194304 00:03:46.452 buf 0x200000500000 len 3145728 PASSED 00:03:46.452 malloc 64 00:03:46.452 buf 0x2000004fff40 len 64 PASSED 00:03:46.452 malloc 4194304 00:03:46.452 register 0x200000800000 6291456 00:03:46.452 buf 0x200000a00000 len 4194304 PASSED 00:03:46.452 free 0x200000500000 3145728 00:03:46.452 free 0x2000004fff40 64 00:03:46.452 unregister 0x200000400000 4194304 PASSED 00:03:46.452 free 0x200000a00000 4194304 00:03:46.452 unregister 0x200000800000 6291456 PASSED 00:03:46.452 malloc 8388608 00:03:46.452 register 0x200000400000 10485760 00:03:46.452 buf 0x200000600000 len 8388608 PASSED 00:03:46.452 free 0x200000600000 8388608 00:03:46.452 unregister 0x200000400000 10485760 PASSED 00:03:46.452 passed 00:03:46.452 00:03:46.452 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.452 suites 1 1 n/a 0 0 00:03:46.452 tests 1 1 1 0 0 00:03:46.452 asserts 15 15 15 0 n/a 00:03:46.452 00:03:46.452 Elapsed time = 0.008 seconds 00:03:46.452 00:03:46.452 real 0m0.057s 00:03:46.452 user 0m0.018s 00:03:46.452 sys 0m0.039s 00:03:46.452 09:42:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.452 09:42:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:46.452 ************************************ 00:03:46.452 END TEST env_mem_callbacks 00:03:46.452 ************************************ 00:03:46.452 00:03:46.452 real 0m6.808s 00:03:46.452 user 0m4.546s 00:03:46.452 sys 0m1.336s 00:03:46.452 09:42:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.452 09:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.452 ************************************ 00:03:46.452 END TEST env 00:03:46.452 ************************************ 00:03:46.452 09:42:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.452 09:42:19 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.452 09:42:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.452 09:42:19 -- common/autotest_common.sh@10 -- # set +x 00:03:46.452 ************************************ 00:03:46.452 START TEST rpc 00:03:46.452 ************************************ 00:03:46.452 09:42:19 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.452 * Looking for test storage... 00:03:46.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.452 09:42:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:46.452 09:42:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:46.452 09:42:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:46.452 09:42:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:46.452 09:42:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.452 09:42:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.452 09:42:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.452 09:42:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.452 09:42:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.453 09:42:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.453 09:42:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.453 09:42:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.453 09:42:19 rpc -- scripts/common.sh@345 -- # : 1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.453 09:42:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.453 09:42:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.453 09:42:19 rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.453 09:42:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.453 09:42:19 rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.453 09:42:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.453 09:42:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.453 09:42:19 rpc -- scripts/common.sh@368 -- # return 0 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.453 --rc genhtml_branch_coverage=1 00:03:46.453 --rc genhtml_function_coverage=1 00:03:46.453 --rc genhtml_legend=1 00:03:46.453 --rc geninfo_all_blocks=1 00:03:46.453 --rc geninfo_unexecuted_blocks=1 00:03:46.453 00:03:46.453 ' 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.453 --rc genhtml_branch_coverage=1 00:03:46.453 --rc genhtml_function_coverage=1 00:03:46.453 --rc genhtml_legend=1 00:03:46.453 --rc geninfo_all_blocks=1 00:03:46.453 --rc geninfo_unexecuted_blocks=1 00:03:46.453 00:03:46.453 ' 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:46.453 --rc genhtml_branch_coverage=1 00:03:46.453 --rc genhtml_function_coverage=1 00:03:46.453 --rc genhtml_legend=1 00:03:46.453 --rc geninfo_all_blocks=1 00:03:46.453 --rc geninfo_unexecuted_blocks=1 00:03:46.453 00:03:46.453 ' 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.453 --rc genhtml_branch_coverage=1 00:03:46.453 --rc genhtml_function_coverage=1 00:03:46.453 --rc genhtml_legend=1 00:03:46.453 --rc geninfo_all_blocks=1 00:03:46.453 --rc geninfo_unexecuted_blocks=1 00:03:46.453 00:03:46.453 ' 00:03:46.453 09:42:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2457345 00:03:46.453 09:42:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.453 09:42:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:46.453 09:42:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2457345 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 2457345 ']' 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:46.453 09:42:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.453 [2024-11-20 09:42:19.992842] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:03:46.453 [2024-11-20 09:42:19.992889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457345 ] 00:03:46.711 [2024-11-20 09:42:20.068148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.711 [2024-11-20 09:42:20.109897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.711 [2024-11-20 09:42:20.109938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2457345' to capture a snapshot of events at runtime. 00:03:46.711 [2024-11-20 09:42:20.109947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:46.711 [2024-11-20 09:42:20.109953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:46.711 [2024-11-20 09:42:20.109958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2457345 for offline analysis/debug. 
00:03:46.711 [2024-11-20 09:42:20.110525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.276 09:42:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.276 09:42:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:47.276 09:42:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.276 09:42:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.276 09:42:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.276 09:42:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.276 09:42:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.276 09:42:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.276 09:42:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 ************************************ 00:03:47.535 START TEST rpc_integrity 00:03:47.535 ************************************ 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.535 09:42:20 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.535 { 00:03:47.535 "name": "Malloc0", 00:03:47.535 "aliases": [ 00:03:47.535 "2ada92d2-f9e3-491c-9b56-b90264a9049c" 00:03:47.535 ], 00:03:47.535 "product_name": "Malloc disk", 00:03:47.535 "block_size": 512, 00:03:47.535 "num_blocks": 16384, 00:03:47.535 "uuid": "2ada92d2-f9e3-491c-9b56-b90264a9049c", 00:03:47.535 "assigned_rate_limits": { 00:03:47.535 "rw_ios_per_sec": 0, 00:03:47.535 "rw_mbytes_per_sec": 0, 00:03:47.535 "r_mbytes_per_sec": 0, 00:03:47.535 "w_mbytes_per_sec": 0 00:03:47.535 }, 00:03:47.535 "claimed": false, 00:03:47.535 "zoned": false, 00:03:47.535 "supported_io_types": { 00:03:47.535 "read": true, 00:03:47.535 "write": true, 00:03:47.535 "unmap": true, 00:03:47.535 "flush": true, 00:03:47.535 "reset": true, 00:03:47.535 "nvme_admin": false, 00:03:47.535 "nvme_io": false, 00:03:47.535 "nvme_io_md": false, 00:03:47.535 "write_zeroes": true, 00:03:47.535 "zcopy": true, 00:03:47.535 "get_zone_info": false, 00:03:47.535 
"zone_management": false, 00:03:47.535 "zone_append": false, 00:03:47.535 "compare": false, 00:03:47.535 "compare_and_write": false, 00:03:47.535 "abort": true, 00:03:47.535 "seek_hole": false, 00:03:47.535 "seek_data": false, 00:03:47.535 "copy": true, 00:03:47.535 "nvme_iov_md": false 00:03:47.535 }, 00:03:47.535 "memory_domains": [ 00:03:47.535 { 00:03:47.535 "dma_device_id": "system", 00:03:47.535 "dma_device_type": 1 00:03:47.535 }, 00:03:47.535 { 00:03:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.535 "dma_device_type": 2 00:03:47.535 } 00:03:47.535 ], 00:03:47.535 "driver_specific": {} 00:03:47.535 } 00:03:47.535 ]' 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 [2024-11-20 09:42:20.992185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.535 [2024-11-20 09:42:20.992217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.535 [2024-11-20 09:42:20.992230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16826e0 00:03:47.535 [2024-11-20 09:42:20.992236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.535 [2024-11-20 09:42:20.993329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.535 [2024-11-20 09:42:20.993350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.535 Passthru0 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.535 09:42:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.535 09:42:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.535 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.535 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.535 { 00:03:47.535 "name": "Malloc0", 00:03:47.535 "aliases": [ 00:03:47.535 "2ada92d2-f9e3-491c-9b56-b90264a9049c" 00:03:47.535 ], 00:03:47.535 "product_name": "Malloc disk", 00:03:47.535 "block_size": 512, 00:03:47.535 "num_blocks": 16384, 00:03:47.535 "uuid": "2ada92d2-f9e3-491c-9b56-b90264a9049c", 00:03:47.535 "assigned_rate_limits": { 00:03:47.535 "rw_ios_per_sec": 0, 00:03:47.535 "rw_mbytes_per_sec": 0, 00:03:47.535 "r_mbytes_per_sec": 0, 00:03:47.535 "w_mbytes_per_sec": 0 00:03:47.535 }, 00:03:47.535 "claimed": true, 00:03:47.535 "claim_type": "exclusive_write", 00:03:47.535 "zoned": false, 00:03:47.535 "supported_io_types": { 00:03:47.535 "read": true, 00:03:47.535 "write": true, 00:03:47.535 "unmap": true, 00:03:47.535 "flush": true, 00:03:47.535 "reset": true, 00:03:47.535 "nvme_admin": false, 00:03:47.535 "nvme_io": false, 00:03:47.535 "nvme_io_md": false, 00:03:47.535 "write_zeroes": true, 00:03:47.535 "zcopy": true, 00:03:47.535 "get_zone_info": false, 00:03:47.535 "zone_management": false, 00:03:47.535 "zone_append": false, 00:03:47.535 "compare": false, 00:03:47.535 "compare_and_write": false, 00:03:47.535 "abort": true, 00:03:47.535 "seek_hole": false, 00:03:47.535 "seek_data": false, 00:03:47.535 "copy": true, 00:03:47.535 "nvme_iov_md": false 00:03:47.535 }, 00:03:47.535 "memory_domains": [ 00:03:47.535 { 00:03:47.535 "dma_device_id": "system", 00:03:47.535 "dma_device_type": 1 00:03:47.535 }, 00:03:47.535 { 00:03:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.535 "dma_device_type": 2 00:03:47.535 } 00:03:47.535 ], 00:03:47.535 "driver_specific": {} 00:03:47.535 }, 00:03:47.535 { 
00:03:47.535 "name": "Passthru0", 00:03:47.535 "aliases": [ 00:03:47.535 "96480630-813d-583e-8be8-b59d5cd2051b" 00:03:47.535 ], 00:03:47.535 "product_name": "passthru", 00:03:47.535 "block_size": 512, 00:03:47.535 "num_blocks": 16384, 00:03:47.535 "uuid": "96480630-813d-583e-8be8-b59d5cd2051b", 00:03:47.535 "assigned_rate_limits": { 00:03:47.535 "rw_ios_per_sec": 0, 00:03:47.535 "rw_mbytes_per_sec": 0, 00:03:47.535 "r_mbytes_per_sec": 0, 00:03:47.535 "w_mbytes_per_sec": 0 00:03:47.535 }, 00:03:47.535 "claimed": false, 00:03:47.535 "zoned": false, 00:03:47.535 "supported_io_types": { 00:03:47.535 "read": true, 00:03:47.535 "write": true, 00:03:47.535 "unmap": true, 00:03:47.535 "flush": true, 00:03:47.535 "reset": true, 00:03:47.535 "nvme_admin": false, 00:03:47.535 "nvme_io": false, 00:03:47.535 "nvme_io_md": false, 00:03:47.535 "write_zeroes": true, 00:03:47.535 "zcopy": true, 00:03:47.535 "get_zone_info": false, 00:03:47.536 "zone_management": false, 00:03:47.536 "zone_append": false, 00:03:47.536 "compare": false, 00:03:47.536 "compare_and_write": false, 00:03:47.536 "abort": true, 00:03:47.536 "seek_hole": false, 00:03:47.536 "seek_data": false, 00:03:47.536 "copy": true, 00:03:47.536 "nvme_iov_md": false 00:03:47.536 }, 00:03:47.536 "memory_domains": [ 00:03:47.536 { 00:03:47.536 "dma_device_id": "system", 00:03:47.536 "dma_device_type": 1 00:03:47.536 }, 00:03:47.536 { 00:03:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.536 "dma_device_type": 2 00:03:47.536 } 00:03:47.536 ], 00:03:47.536 "driver_specific": { 00:03:47.536 "passthru": { 00:03:47.536 "name": "Passthru0", 00:03:47.536 "base_bdev_name": "Malloc0" 00:03:47.536 } 00:03:47.536 } 00:03:47.536 } 00:03:47.536 ]' 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.536 09:42:21 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.536 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.536 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.821 09:42:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.821 00:03:47.821 real 0m0.268s 00:03:47.821 user 0m0.164s 00:03:47.821 sys 0m0.041s 00:03:47.821 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.821 09:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 ************************************ 00:03:47.821 END TEST rpc_integrity 00:03:47.821 ************************************ 00:03:47.821 09:42:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.821 09:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.821 09:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.821 09:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 ************************************ 00:03:47.821 START TEST rpc_plugins 
00:03:47.821 ************************************ 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.821 { 00:03:47.821 "name": "Malloc1", 00:03:47.821 "aliases": [ 00:03:47.821 "33671702-6b36-4cf4-a875-2bddda11334f" 00:03:47.821 ], 00:03:47.821 "product_name": "Malloc disk", 00:03:47.821 "block_size": 4096, 00:03:47.821 "num_blocks": 256, 00:03:47.821 "uuid": "33671702-6b36-4cf4-a875-2bddda11334f", 00:03:47.821 "assigned_rate_limits": { 00:03:47.821 "rw_ios_per_sec": 0, 00:03:47.821 "rw_mbytes_per_sec": 0, 00:03:47.821 "r_mbytes_per_sec": 0, 00:03:47.821 "w_mbytes_per_sec": 0 00:03:47.821 }, 00:03:47.821 "claimed": false, 00:03:47.821 "zoned": false, 00:03:47.821 "supported_io_types": { 00:03:47.821 "read": true, 00:03:47.821 "write": true, 00:03:47.821 "unmap": true, 00:03:47.821 "flush": true, 00:03:47.821 "reset": true, 00:03:47.821 "nvme_admin": false, 00:03:47.821 "nvme_io": false, 00:03:47.821 "nvme_io_md": false, 00:03:47.821 "write_zeroes": true, 00:03:47.821 "zcopy": true, 00:03:47.821 "get_zone_info": false, 00:03:47.821 "zone_management": false, 00:03:47.821 
"zone_append": false, 00:03:47.821 "compare": false, 00:03:47.821 "compare_and_write": false, 00:03:47.821 "abort": true, 00:03:47.821 "seek_hole": false, 00:03:47.821 "seek_data": false, 00:03:47.821 "copy": true, 00:03:47.821 "nvme_iov_md": false 00:03:47.821 }, 00:03:47.821 "memory_domains": [ 00:03:47.821 { 00:03:47.821 "dma_device_id": "system", 00:03:47.821 "dma_device_type": 1 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.821 "dma_device_type": 2 00:03:47.821 } 00:03:47.821 ], 00:03:47.821 "driver_specific": {} 00:03:47.821 } 00:03:47.821 ]' 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.821 09:42:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.821 00:03:47.821 real 0m0.146s 00:03:47.821 user 0m0.092s 00:03:47.821 sys 0m0.016s 00:03:47.822 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.822 09:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.822 ************************************ 
00:03:47.822 END TEST rpc_plugins 00:03:47.822 ************************************ 00:03:47.822 09:42:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.822 09:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.822 09:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.822 09:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.079 ************************************ 00:03:48.079 START TEST rpc_trace_cmd_test 00:03:48.079 ************************************ 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.079 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2457345", 00:03:48.079 "tpoint_group_mask": "0x8", 00:03:48.079 "iscsi_conn": { 00:03:48.079 "mask": "0x2", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "scsi": { 00:03:48.079 "mask": "0x4", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "bdev": { 00:03:48.079 "mask": "0x8", 00:03:48.079 "tpoint_mask": "0xffffffffffffffff" 00:03:48.079 }, 00:03:48.079 "nvmf_rdma": { 00:03:48.079 "mask": "0x10", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "nvmf_tcp": { 00:03:48.079 "mask": "0x20", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "ftl": { 00:03:48.079 "mask": "0x40", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "blobfs": { 00:03:48.079 "mask": "0x80", 00:03:48.079 
"tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "dsa": { 00:03:48.079 "mask": "0x200", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "thread": { 00:03:48.079 "mask": "0x400", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "nvme_pcie": { 00:03:48.079 "mask": "0x800", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "iaa": { 00:03:48.079 "mask": "0x1000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "nvme_tcp": { 00:03:48.079 "mask": "0x2000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "bdev_nvme": { 00:03:48.079 "mask": "0x4000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "sock": { 00:03:48.079 "mask": "0x8000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "blob": { 00:03:48.079 "mask": "0x10000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "bdev_raid": { 00:03:48.079 "mask": "0x20000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 }, 00:03:48.079 "scheduler": { 00:03:48.079 "mask": "0x40000", 00:03:48.079 "tpoint_mask": "0x0" 00:03:48.079 } 00:03:48.079 }' 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.079 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:48.080 00:03:48.080 real 0m0.220s 00:03:48.080 user 0m0.183s 00:03:48.080 sys 0m0.028s 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.080 09:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.080 ************************************ 00:03:48.080 END TEST rpc_trace_cmd_test 00:03:48.080 ************************************ 00:03:48.338 09:42:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.338 09:42:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.338 09:42:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.338 09:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.338 09:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.338 09:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.338 ************************************ 00:03:48.338 START TEST rpc_daemon_integrity 00:03:48.338 ************************************ 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.338 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.338 { 00:03:48.338 "name": "Malloc2", 00:03:48.338 "aliases": [ 00:03:48.338 "7e41af9b-4e75-4f75-ba25-81e2a3365da5" 00:03:48.338 ], 00:03:48.338 "product_name": "Malloc disk", 00:03:48.338 "block_size": 512, 00:03:48.338 "num_blocks": 16384, 00:03:48.338 "uuid": "7e41af9b-4e75-4f75-ba25-81e2a3365da5", 00:03:48.338 "assigned_rate_limits": { 00:03:48.338 "rw_ios_per_sec": 0, 00:03:48.338 "rw_mbytes_per_sec": 0, 00:03:48.338 "r_mbytes_per_sec": 0, 00:03:48.338 "w_mbytes_per_sec": 0 00:03:48.338 }, 00:03:48.338 "claimed": false, 00:03:48.338 "zoned": false, 00:03:48.338 "supported_io_types": { 00:03:48.338 "read": true, 00:03:48.338 "write": true, 00:03:48.338 "unmap": true, 00:03:48.338 "flush": true, 00:03:48.338 "reset": true, 00:03:48.338 "nvme_admin": false, 00:03:48.338 "nvme_io": false, 00:03:48.338 "nvme_io_md": false, 00:03:48.338 "write_zeroes": true, 00:03:48.338 "zcopy": true, 00:03:48.338 "get_zone_info": false, 00:03:48.338 "zone_management": false, 00:03:48.338 "zone_append": false, 00:03:48.338 "compare": false, 00:03:48.338 "compare_and_write": false, 00:03:48.338 "abort": true, 00:03:48.338 "seek_hole": false, 00:03:48.338 "seek_data": false, 00:03:48.338 "copy": true, 00:03:48.338 "nvme_iov_md": false 00:03:48.338 }, 00:03:48.338 "memory_domains": [ 00:03:48.338 { 
00:03:48.338 "dma_device_id": "system", 00:03:48.338 "dma_device_type": 1 00:03:48.338 }, 00:03:48.338 { 00:03:48.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.338 "dma_device_type": 2 00:03:48.338 } 00:03:48.338 ], 00:03:48.338 "driver_specific": {} 00:03:48.338 } 00:03:48.338 ]' 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.339 [2024-11-20 09:42:21.838500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.339 [2024-11-20 09:42:21.838528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.339 [2024-11-20 09:42:21.838539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1712b70 00:03:48.339 [2024-11-20 09:42:21.838546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.339 [2024-11-20 09:42:21.839502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.339 [2024-11-20 09:42:21.839522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.339 Passthru0 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.339 { 00:03:48.339 "name": "Malloc2", 00:03:48.339 "aliases": [ 00:03:48.339 "7e41af9b-4e75-4f75-ba25-81e2a3365da5" 00:03:48.339 ], 00:03:48.339 "product_name": "Malloc disk", 00:03:48.339 "block_size": 512, 00:03:48.339 "num_blocks": 16384, 00:03:48.339 "uuid": "7e41af9b-4e75-4f75-ba25-81e2a3365da5", 00:03:48.339 "assigned_rate_limits": { 00:03:48.339 "rw_ios_per_sec": 0, 00:03:48.339 "rw_mbytes_per_sec": 0, 00:03:48.339 "r_mbytes_per_sec": 0, 00:03:48.339 "w_mbytes_per_sec": 0 00:03:48.339 }, 00:03:48.339 "claimed": true, 00:03:48.339 "claim_type": "exclusive_write", 00:03:48.339 "zoned": false, 00:03:48.339 "supported_io_types": { 00:03:48.339 "read": true, 00:03:48.339 "write": true, 00:03:48.339 "unmap": true, 00:03:48.339 "flush": true, 00:03:48.339 "reset": true, 00:03:48.339 "nvme_admin": false, 00:03:48.339 "nvme_io": false, 00:03:48.339 "nvme_io_md": false, 00:03:48.339 "write_zeroes": true, 00:03:48.339 "zcopy": true, 00:03:48.339 "get_zone_info": false, 00:03:48.339 "zone_management": false, 00:03:48.339 "zone_append": false, 00:03:48.339 "compare": false, 00:03:48.339 "compare_and_write": false, 00:03:48.339 "abort": true, 00:03:48.339 "seek_hole": false, 00:03:48.339 "seek_data": false, 00:03:48.339 "copy": true, 00:03:48.339 "nvme_iov_md": false 00:03:48.339 }, 00:03:48.339 "memory_domains": [ 00:03:48.339 { 00:03:48.339 "dma_device_id": "system", 00:03:48.339 "dma_device_type": 1 00:03:48.339 }, 00:03:48.339 { 00:03:48.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.339 "dma_device_type": 2 00:03:48.339 } 00:03:48.339 ], 00:03:48.339 "driver_specific": {} 00:03:48.339 }, 00:03:48.339 { 00:03:48.339 "name": "Passthru0", 00:03:48.339 "aliases": [ 00:03:48.339 "4c0667d3-8476-5744-be20-67abd76b013d" 00:03:48.339 ], 00:03:48.339 "product_name": "passthru", 00:03:48.339 "block_size": 512, 00:03:48.339 "num_blocks": 16384, 00:03:48.339 "uuid": 
"4c0667d3-8476-5744-be20-67abd76b013d", 00:03:48.339 "assigned_rate_limits": { 00:03:48.339 "rw_ios_per_sec": 0, 00:03:48.339 "rw_mbytes_per_sec": 0, 00:03:48.339 "r_mbytes_per_sec": 0, 00:03:48.339 "w_mbytes_per_sec": 0 00:03:48.339 }, 00:03:48.339 "claimed": false, 00:03:48.339 "zoned": false, 00:03:48.339 "supported_io_types": { 00:03:48.339 "read": true, 00:03:48.339 "write": true, 00:03:48.339 "unmap": true, 00:03:48.339 "flush": true, 00:03:48.339 "reset": true, 00:03:48.339 "nvme_admin": false, 00:03:48.339 "nvme_io": false, 00:03:48.339 "nvme_io_md": false, 00:03:48.339 "write_zeroes": true, 00:03:48.339 "zcopy": true, 00:03:48.339 "get_zone_info": false, 00:03:48.339 "zone_management": false, 00:03:48.339 "zone_append": false, 00:03:48.339 "compare": false, 00:03:48.339 "compare_and_write": false, 00:03:48.339 "abort": true, 00:03:48.339 "seek_hole": false, 00:03:48.339 "seek_data": false, 00:03:48.339 "copy": true, 00:03:48.339 "nvme_iov_md": false 00:03:48.339 }, 00:03:48.339 "memory_domains": [ 00:03:48.339 { 00:03:48.339 "dma_device_id": "system", 00:03:48.339 "dma_device_type": 1 00:03:48.339 }, 00:03:48.339 { 00:03:48.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.339 "dma_device_type": 2 00:03:48.339 } 00:03:48.339 ], 00:03:48.339 "driver_specific": { 00:03:48.339 "passthru": { 00:03:48.339 "name": "Passthru0", 00:03:48.339 "base_bdev_name": "Malloc2" 00:03:48.339 } 00:03:48.339 } 00:03:48.339 } 00:03:48.339 ]' 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.339 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.598 00:03:48.598 real 0m0.279s 00:03:48.598 user 0m0.185s 00:03:48.598 sys 0m0.032s 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.598 09:42:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.598 ************************************ 00:03:48.598 END TEST rpc_daemon_integrity 00:03:48.598 ************************************ 00:03:48.598 09:42:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.598 09:42:22 rpc -- rpc/rpc.sh@84 -- # killprocess 2457345 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 2457345 ']' 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@958 -- # kill -0 2457345 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@959 -- # uname 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.598 09:42:22 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2457345 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2457345' 00:03:48.598 killing process with pid 2457345 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@973 -- # kill 2457345 00:03:48.598 09:42:22 rpc -- common/autotest_common.sh@978 -- # wait 2457345 00:03:48.857 00:03:48.857 real 0m2.606s 00:03:48.857 user 0m3.355s 00:03:48.857 sys 0m0.709s 00:03:48.857 09:42:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.857 09:42:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.857 ************************************ 00:03:48.857 END TEST rpc 00:03:48.857 ************************************ 00:03:48.857 09:42:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:48.857 09:42:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.857 09:42:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.857 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:03:49.117 ************************************ 00:03:49.117 START TEST skip_rpc 00:03:49.117 ************************************ 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.117 * Looking for test storage... 
00:03:49.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.117 09:42:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.117 --rc genhtml_branch_coverage=1 00:03:49.117 --rc genhtml_function_coverage=1 00:03:49.117 --rc genhtml_legend=1 00:03:49.117 --rc geninfo_all_blocks=1 00:03:49.117 --rc geninfo_unexecuted_blocks=1 00:03:49.117 00:03:49.117 ' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.117 --rc genhtml_branch_coverage=1 00:03:49.117 --rc genhtml_function_coverage=1 00:03:49.117 --rc genhtml_legend=1 00:03:49.117 --rc geninfo_all_blocks=1 00:03:49.117 --rc geninfo_unexecuted_blocks=1 00:03:49.117 00:03:49.117 ' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.117 --rc genhtml_branch_coverage=1 00:03:49.117 --rc genhtml_function_coverage=1 00:03:49.117 --rc genhtml_legend=1 00:03:49.117 --rc geninfo_all_blocks=1 00:03:49.117 --rc geninfo_unexecuted_blocks=1 00:03:49.117 00:03:49.117 ' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.117 --rc genhtml_branch_coverage=1 00:03:49.117 --rc genhtml_function_coverage=1 00:03:49.117 --rc genhtml_legend=1 00:03:49.117 --rc geninfo_all_blocks=1 00:03:49.117 --rc geninfo_unexecuted_blocks=1 00:03:49.117 00:03:49.117 ' 00:03:49.117 09:42:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.117 09:42:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.117 09:42:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.117 09:42:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.117 ************************************ 00:03:49.117 START TEST skip_rpc 00:03:49.117 ************************************ 00:03:49.117 09:42:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:49.117 09:42:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2457993 00:03:49.117 09:42:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.117 09:42:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.117 09:42:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:49.377 [2024-11-20 09:42:22.699810] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:03:49.377 [2024-11-20 09:42:22.699848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457993 ] 00:03:49.377 [2024-11-20 09:42:22.773721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.377 [2024-11-20 09:42:22.813043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:54.639 09:42:27 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2457993 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2457993 ']' 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2457993 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2457993 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2457993' 00:03:54.639 killing process with pid 2457993 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2457993 00:03:54.639 09:42:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2457993 00:03:54.639 00:03:54.639 real 0m5.367s 00:03:54.639 user 0m5.120s 00:03:54.639 sys 0m0.278s 00:03:54.639 09:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.639 09:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.639 ************************************ 00:03:54.639 END TEST skip_rpc 00:03:54.639 ************************************ 00:03:54.639 09:42:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:54.639 09:42:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.639 09:42:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.639 09:42:28 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.639 ************************************ 00:03:54.639 START TEST skip_rpc_with_json 00:03:54.639 ************************************ 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2458934 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2458934 00:03:54.639 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2458934 ']' 00:03:54.640 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.640 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.640 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.640 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.640 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.640 [2024-11-20 09:42:28.146225] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:03:54.640 [2024-11-20 09:42:28.146273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458934 ] 00:03:54.898 [2024-11-20 09:42:28.220848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.898 [2024-11-20 09:42:28.260687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.463 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.464 [2024-11-20 09:42:28.981430] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:55.464 request: 00:03:55.464 { 00:03:55.464 "trtype": "tcp", 00:03:55.464 "method": "nvmf_get_transports", 00:03:55.464 "req_id": 1 00:03:55.464 } 00:03:55.464 Got JSON-RPC error response 00:03:55.464 response: 00:03:55.464 { 00:03:55.464 "code": -19, 00:03:55.464 "message": "No such device" 00:03:55.464 } 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.464 [2024-11-20 09:42:28.993525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.464 09:42:28 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.464 09:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.723 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.723 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.723 { 00:03:55.723 "subsystems": [ 00:03:55.723 { 00:03:55.723 "subsystem": "fsdev", 00:03:55.723 "config": [ 00:03:55.723 { 00:03:55.723 "method": "fsdev_set_opts", 00:03:55.723 "params": { 00:03:55.723 "fsdev_io_pool_size": 65535, 00:03:55.723 "fsdev_io_cache_size": 256 00:03:55.723 } 00:03:55.723 } 00:03:55.723 ] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "vfio_user_target", 00:03:55.723 "config": null 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "keyring", 00:03:55.723 "config": [] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "iobuf", 00:03:55.723 "config": [ 00:03:55.723 { 00:03:55.723 "method": "iobuf_set_options", 00:03:55.723 "params": { 00:03:55.723 "small_pool_count": 8192, 00:03:55.723 "large_pool_count": 1024, 00:03:55.723 "small_bufsize": 8192, 00:03:55.723 "large_bufsize": 135168, 00:03:55.723 "enable_numa": false 00:03:55.723 } 00:03:55.723 } 00:03:55.723 ] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "sock", 00:03:55.723 "config": [ 00:03:55.723 { 00:03:55.723 "method": "sock_set_default_impl", 00:03:55.723 "params": { 00:03:55.723 "impl_name": "posix" 00:03:55.723 } 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "method": "sock_impl_set_options", 00:03:55.723 "params": { 00:03:55.723 "impl_name": "ssl", 00:03:55.723 "recv_buf_size": 4096, 00:03:55.723 "send_buf_size": 4096, 
00:03:55.723 "enable_recv_pipe": true, 00:03:55.723 "enable_quickack": false, 00:03:55.723 "enable_placement_id": 0, 00:03:55.723 "enable_zerocopy_send_server": true, 00:03:55.723 "enable_zerocopy_send_client": false, 00:03:55.723 "zerocopy_threshold": 0, 00:03:55.723 "tls_version": 0, 00:03:55.723 "enable_ktls": false 00:03:55.723 } 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "method": "sock_impl_set_options", 00:03:55.723 "params": { 00:03:55.723 "impl_name": "posix", 00:03:55.723 "recv_buf_size": 2097152, 00:03:55.723 "send_buf_size": 2097152, 00:03:55.723 "enable_recv_pipe": true, 00:03:55.723 "enable_quickack": false, 00:03:55.723 "enable_placement_id": 0, 00:03:55.723 "enable_zerocopy_send_server": true, 00:03:55.723 "enable_zerocopy_send_client": false, 00:03:55.723 "zerocopy_threshold": 0, 00:03:55.723 "tls_version": 0, 00:03:55.723 "enable_ktls": false 00:03:55.723 } 00:03:55.723 } 00:03:55.723 ] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "vmd", 00:03:55.723 "config": [] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "accel", 00:03:55.723 "config": [ 00:03:55.723 { 00:03:55.723 "method": "accel_set_options", 00:03:55.723 "params": { 00:03:55.723 "small_cache_size": 128, 00:03:55.723 "large_cache_size": 16, 00:03:55.723 "task_count": 2048, 00:03:55.723 "sequence_count": 2048, 00:03:55.723 "buf_count": 2048 00:03:55.723 } 00:03:55.723 } 00:03:55.723 ] 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "subsystem": "bdev", 00:03:55.723 "config": [ 00:03:55.723 { 00:03:55.723 "method": "bdev_set_options", 00:03:55.723 "params": { 00:03:55.723 "bdev_io_pool_size": 65535, 00:03:55.723 "bdev_io_cache_size": 256, 00:03:55.723 "bdev_auto_examine": true, 00:03:55.723 "iobuf_small_cache_size": 128, 00:03:55.723 "iobuf_large_cache_size": 16 00:03:55.723 } 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "method": "bdev_raid_set_options", 00:03:55.723 "params": { 00:03:55.723 "process_window_size_kb": 1024, 00:03:55.723 "process_max_bandwidth_mb_sec": 0 
00:03:55.723 } 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "method": "bdev_iscsi_set_options", 00:03:55.723 "params": { 00:03:55.723 "timeout_sec": 30 00:03:55.723 } 00:03:55.723 }, 00:03:55.723 { 00:03:55.723 "method": "bdev_nvme_set_options", 00:03:55.723 "params": { 00:03:55.723 "action_on_timeout": "none", 00:03:55.723 "timeout_us": 0, 00:03:55.723 "timeout_admin_us": 0, 00:03:55.723 "keep_alive_timeout_ms": 10000, 00:03:55.723 "arbitration_burst": 0, 00:03:55.723 "low_priority_weight": 0, 00:03:55.723 "medium_priority_weight": 0, 00:03:55.723 "high_priority_weight": 0, 00:03:55.723 "nvme_adminq_poll_period_us": 10000, 00:03:55.723 "nvme_ioq_poll_period_us": 0, 00:03:55.723 "io_queue_requests": 0, 00:03:55.723 "delay_cmd_submit": true, 00:03:55.723 "transport_retry_count": 4, 00:03:55.723 "bdev_retry_count": 3, 00:03:55.723 "transport_ack_timeout": 0, 00:03:55.723 "ctrlr_loss_timeout_sec": 0, 00:03:55.723 "reconnect_delay_sec": 0, 00:03:55.723 "fast_io_fail_timeout_sec": 0, 00:03:55.723 "disable_auto_failback": false, 00:03:55.723 "generate_uuids": false, 00:03:55.723 "transport_tos": 0, 00:03:55.723 "nvme_error_stat": false, 00:03:55.723 "rdma_srq_size": 0, 00:03:55.723 "io_path_stat": false, 00:03:55.723 "allow_accel_sequence": false, 00:03:55.723 "rdma_max_cq_size": 0, 00:03:55.723 "rdma_cm_event_timeout_ms": 0, 00:03:55.723 "dhchap_digests": [ 00:03:55.723 "sha256", 00:03:55.723 "sha384", 00:03:55.723 "sha512" 00:03:55.723 ], 00:03:55.723 "dhchap_dhgroups": [ 00:03:55.723 "null", 00:03:55.723 "ffdhe2048", 00:03:55.723 "ffdhe3072", 00:03:55.723 "ffdhe4096", 00:03:55.723 "ffdhe6144", 00:03:55.723 "ffdhe8192" 00:03:55.724 ] 00:03:55.724 } 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "method": "bdev_nvme_set_hotplug", 00:03:55.724 "params": { 00:03:55.724 "period_us": 100000, 00:03:55.724 "enable": false 00:03:55.724 } 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "method": "bdev_wait_for_examine" 00:03:55.724 } 00:03:55.724 ] 00:03:55.724 }, 00:03:55.724 { 
00:03:55.724 "subsystem": "scsi", 00:03:55.724 "config": null 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "scheduler", 00:03:55.724 "config": [ 00:03:55.724 { 00:03:55.724 "method": "framework_set_scheduler", 00:03:55.724 "params": { 00:03:55.724 "name": "static" 00:03:55.724 } 00:03:55.724 } 00:03:55.724 ] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "vhost_scsi", 00:03:55.724 "config": [] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "vhost_blk", 00:03:55.724 "config": [] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "ublk", 00:03:55.724 "config": [] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "nbd", 00:03:55.724 "config": [] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "nvmf", 00:03:55.724 "config": [ 00:03:55.724 { 00:03:55.724 "method": "nvmf_set_config", 00:03:55.724 "params": { 00:03:55.724 "discovery_filter": "match_any", 00:03:55.724 "admin_cmd_passthru": { 00:03:55.724 "identify_ctrlr": false 00:03:55.724 }, 00:03:55.724 "dhchap_digests": [ 00:03:55.724 "sha256", 00:03:55.724 "sha384", 00:03:55.724 "sha512" 00:03:55.724 ], 00:03:55.724 "dhchap_dhgroups": [ 00:03:55.724 "null", 00:03:55.724 "ffdhe2048", 00:03:55.724 "ffdhe3072", 00:03:55.724 "ffdhe4096", 00:03:55.724 "ffdhe6144", 00:03:55.724 "ffdhe8192" 00:03:55.724 ] 00:03:55.724 } 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "method": "nvmf_set_max_subsystems", 00:03:55.724 "params": { 00:03:55.724 "max_subsystems": 1024 00:03:55.724 } 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "method": "nvmf_set_crdt", 00:03:55.724 "params": { 00:03:55.724 "crdt1": 0, 00:03:55.724 "crdt2": 0, 00:03:55.724 "crdt3": 0 00:03:55.724 } 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "method": "nvmf_create_transport", 00:03:55.724 "params": { 00:03:55.724 "trtype": "TCP", 00:03:55.724 "max_queue_depth": 128, 00:03:55.724 "max_io_qpairs_per_ctrlr": 127, 00:03:55.724 "in_capsule_data_size": 4096, 00:03:55.724 "max_io_size": 131072, 00:03:55.724 
"io_unit_size": 131072, 00:03:55.724 "max_aq_depth": 128, 00:03:55.724 "num_shared_buffers": 511, 00:03:55.724 "buf_cache_size": 4294967295, 00:03:55.724 "dif_insert_or_strip": false, 00:03:55.724 "zcopy": false, 00:03:55.724 "c2h_success": true, 00:03:55.724 "sock_priority": 0, 00:03:55.724 "abort_timeout_sec": 1, 00:03:55.724 "ack_timeout": 0, 00:03:55.724 "data_wr_pool_size": 0 00:03:55.724 } 00:03:55.724 } 00:03:55.724 ] 00:03:55.724 }, 00:03:55.724 { 00:03:55.724 "subsystem": "iscsi", 00:03:55.724 "config": [ 00:03:55.724 { 00:03:55.724 "method": "iscsi_set_options", 00:03:55.724 "params": { 00:03:55.724 "node_base": "iqn.2016-06.io.spdk", 00:03:55.724 "max_sessions": 128, 00:03:55.724 "max_connections_per_session": 2, 00:03:55.724 "max_queue_depth": 64, 00:03:55.724 "default_time2wait": 2, 00:03:55.724 "default_time2retain": 20, 00:03:55.724 "first_burst_length": 8192, 00:03:55.724 "immediate_data": true, 00:03:55.724 "allow_duplicated_isid": false, 00:03:55.724 "error_recovery_level": 0, 00:03:55.724 "nop_timeout": 60, 00:03:55.724 "nop_in_interval": 30, 00:03:55.724 "disable_chap": false, 00:03:55.724 "require_chap": false, 00:03:55.724 "mutual_chap": false, 00:03:55.724 "chap_group": 0, 00:03:55.724 "max_large_datain_per_connection": 64, 00:03:55.724 "max_r2t_per_connection": 4, 00:03:55.724 "pdu_pool_size": 36864, 00:03:55.724 "immediate_data_pool_size": 16384, 00:03:55.724 "data_out_pool_size": 2048 00:03:55.724 } 00:03:55.724 } 00:03:55.724 ] 00:03:55.724 } 00:03:55.724 ] 00:03:55.724 } 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2458934 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2458934 ']' 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2458934 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458934 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458934' 00:03:55.724 killing process with pid 2458934 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2458934 00:03:55.724 09:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2458934 00:03:55.983 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2459186 00:03:55.983 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.983 09:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2459186 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2459186 ']' 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2459186 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459186 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459186' 00:04:01.247 killing process with pid 2459186 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2459186 00:04:01.247 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2459186 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.506 00:04:01.506 real 0m6.787s 00:04:01.506 user 0m6.643s 00:04:01.506 sys 0m0.613s 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.506 ************************************ 00:04:01.506 END TEST skip_rpc_with_json 00:04:01.506 ************************************ 00:04:01.506 09:42:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:01.506 09:42:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.506 09:42:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.506 09:42:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.506 ************************************ 00:04:01.506 START TEST skip_rpc_with_delay 00:04:01.506 ************************************ 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.506 09:42:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.506 [2024-11-20 09:42:35.001846] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.506 00:04:01.506 real 0m0.069s 00:04:01.506 user 0m0.043s 00:04:01.506 sys 0m0.025s 00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.506 09:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:01.506 ************************************ 00:04:01.506 END TEST skip_rpc_with_delay 00:04:01.507 ************************************ 00:04:01.507 09:42:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:01.507 09:42:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:01.507 09:42:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:01.507 09:42:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.507 09:42:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.507 09:42:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.765 ************************************ 00:04:01.765 START TEST exit_on_failed_rpc_init 00:04:01.765 ************************************ 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2460155 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2460155 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2460155 ']' 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.765 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.765 [2024-11-20 09:42:35.137382] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:01.765 [2024-11-20 09:42:35.137422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460155 ] 00:04:01.765 [2024-11-20 09:42:35.212157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.765 [2024-11-20 09:42:35.254384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.023 
09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.023 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:02.024 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.024 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:02.024 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.024 [2024-11-20 09:42:35.519728] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:02.024 [2024-11-20 09:42:35.519772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460163 ]
00:04:02.024 [2024-11-20 09:42:35.593233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:02.283 [2024-11-20 09:42:35.634044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:02.283 [2024-11-20 09:42:35.634094] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:02.283 [2024-11-20 09:42:35.634103] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:02.283 [2024-11-20 09:42:35.634111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2460155
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2460155 ']'
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2460155
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2460155
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2460155'
00:04:02.283 killing process with pid 2460155
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2460155
00:04:02.283 09:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2460155
00:04:02.542
00:04:02.542 real 0m0.943s
00:04:02.542 user 0m1.019s
00:04:02.542 sys 0m0.373s
00:04:02.542 09:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:02.542 09:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:02.542 ************************************
00:04:02.542 END TEST exit_on_failed_rpc_init
00:04:02.542 ************************************
00:04:02.542 09:42:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:02.542
00:04:02.542 real 0m13.626s
00:04:02.542 user 0m13.043s
00:04:02.542 sys 0m1.563s
00:04:02.542 09:42:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:02.542 09:42:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.542 ************************************
00:04:02.542 END TEST skip_rpc
00:04:02.542 ************************************
00:04:02.542 09:42:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:02.542 09:42:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:02.542 09:42:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:02.542 09:42:36 -- common/autotest_common.sh@10 -- # set +x
00:04:02.802 ************************************
00:04:02.802 START TEST rpc_client
00:04:02.802 ************************************
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:04:02.802 * Looking for test storage...
00:04:02.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:02.802 09:42:36 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.802 --rc genhtml_branch_coverage=1
00:04:02.802 --rc genhtml_function_coverage=1
00:04:02.802 --rc genhtml_legend=1
00:04:02.802 --rc geninfo_all_blocks=1
00:04:02.802 --rc geninfo_unexecuted_blocks=1
00:04:02.802
00:04:02.802 '
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.802 --rc genhtml_branch_coverage=1
00:04:02.802 --rc genhtml_function_coverage=1
00:04:02.802 --rc genhtml_legend=1
00:04:02.802 --rc geninfo_all_blocks=1
00:04:02.802 --rc geninfo_unexecuted_blocks=1
00:04:02.802
00:04:02.802 '
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.802 --rc genhtml_branch_coverage=1
00:04:02.802 --rc genhtml_function_coverage=1
00:04:02.802 --rc genhtml_legend=1
00:04:02.802 --rc geninfo_all_blocks=1
00:04:02.802 --rc geninfo_unexecuted_blocks=1
00:04:02.802
00:04:02.802 '
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:02.802 --rc genhtml_branch_coverage=1
00:04:02.802 --rc genhtml_function_coverage=1
00:04:02.802 --rc genhtml_legend=1
00:04:02.802 --rc geninfo_all_blocks=1
00:04:02.802 --rc geninfo_unexecuted_blocks=1
00:04:02.802
00:04:02.802 '
00:04:02.802 09:42:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:02.802 OK
00:04:02.802 09:42:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:02.802
00:04:02.802 real 0m0.189s
00:04:02.802 user 0m0.119s
00:04:02.802 sys 0m0.084s
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:02.802 09:42:36 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:02.802 ************************************
00:04:02.802 END TEST rpc_client
00:04:02.802 ************************************
00:04:02.802 09:42:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:02.802 09:42:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:02.802 09:42:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:02.802 09:42:36 -- common/autotest_common.sh@10 -- # set +x
00:04:03.062 ************************************
00:04:03.062 START TEST json_config
00:04:03.062 ************************************
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:03.062 09:42:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:03.062 09:42:36 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:03.062 09:42:36 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:03.062 09:42:36 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:03.062 09:42:36 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:03.062 09:42:36 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:03.062 09:42:36 json_config -- scripts/common.sh@345 -- # : 1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:03.062 09:42:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:03.062 09:42:36 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@353 -- # local d=1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:03.062 09:42:36 json_config -- scripts/common.sh@355 -- # echo 1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:03.062 09:42:36 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@353 -- # local d=2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:03.062 09:42:36 json_config -- scripts/common.sh@355 -- # echo 2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:03.062 09:42:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:03.062 09:42:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:03.062 09:42:36 json_config -- scripts/common.sh@368 -- # return 0
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.062 --rc genhtml_branch_coverage=1
00:04:03.062 --rc genhtml_function_coverage=1
00:04:03.062 --rc genhtml_legend=1
00:04:03.062 --rc geninfo_all_blocks=1
00:04:03.062 --rc geninfo_unexecuted_blocks=1
00:04:03.062
00:04:03.062 '
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.062 --rc genhtml_branch_coverage=1
00:04:03.062 --rc genhtml_function_coverage=1
00:04:03.062 --rc genhtml_legend=1
00:04:03.062 --rc geninfo_all_blocks=1
00:04:03.062 --rc geninfo_unexecuted_blocks=1
00:04:03.062
00:04:03.062 '
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.062 --rc genhtml_branch_coverage=1
00:04:03.062 --rc genhtml_function_coverage=1
00:04:03.062 --rc genhtml_legend=1
00:04:03.062 --rc geninfo_all_blocks=1
00:04:03.062 --rc geninfo_unexecuted_blocks=1
00:04:03.062
00:04:03.062 '
00:04:03.062 09:42:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:03.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.062 --rc genhtml_branch_coverage=1
00:04:03.062 --rc genhtml_function_coverage=1
00:04:03.062 --rc genhtml_legend=1
00:04:03.062 --rc geninfo_all_blocks=1
00:04:03.062 --rc geninfo_unexecuted_blocks=1
00:04:03.062
00:04:03.062 '
00:04:03.062 09:42:36 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:03.062 09:42:36 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:03.062 09:42:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:03.062 09:42:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:03.062 09:42:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:03.062 09:42:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:03.062 09:42:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.063 09:42:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.063 09:42:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.063 09:42:36 json_config -- paths/export.sh@5 -- # export PATH
00:04:03.063 09:42:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@51 -- # : 0
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:03.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:03.063 09:42:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:04:03.063 INFO: JSON configuration test init
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:03.063 09:42:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:04:03.063 09:42:36 json_config -- json_config/common.sh@9 -- # local app=target
00:04:03.063 09:42:36 json_config -- json_config/common.sh@10 -- # shift
00:04:03.063 09:42:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:03.063 09:42:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:03.063 09:42:36 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:03.063 09:42:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:03.063 09:42:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:03.063 09:42:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2460515
00:04:03.063 09:42:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:03.063 Waiting for target to run...
00:04:03.063 09:42:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:03.063 09:42:36 json_config -- json_config/common.sh@25 -- # waitforlisten 2460515 /var/tmp/spdk_tgt.sock
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 2460515 ']'
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:03.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:03.063 09:42:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:03.322 [2024-11-20 09:42:36.644502] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:04:03.322 [2024-11-20 09:42:36.644546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460515 ]
00:04:03.580 [2024-11-20 09:42:36.936799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:03.580 [2024-11-20 09:42:36.971239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:04.147 09:42:37 json_config -- json_config/common.sh@26 -- # echo ''
00:04:04.147
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:04.147 09:42:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:04.147 09:42:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:04:04.147 09:42:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:04:07.458 09:42:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@51 -- # local get_types
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@54 -- # sort
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@62 -- # return 0
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:07.458 09:42:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:04:07.458 09:42:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:07.458 09:42:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:07.458 MallocForNvmf0
00:04:07.717 09:42:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:07.717 09:42:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.717 MallocForNvmf1 00:04:07.717 09:42:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.717 09:42:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.975 [2024-11-20 09:42:41.431126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.975 09:42:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.975 09:42:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.233 09:42:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.233 09:42:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.491 09:42:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.491 09:42:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.491 09:42:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.491 09:42:42 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.748 [2024-11-20 09:42:42.209551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.748 09:42:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:08.748 09:42:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.748 09:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.748 09:42:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:08.748 09:42:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.748 09:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.748 09:42:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:08.748 09:42:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.748 09:42:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.006 MallocBdevForConfigChangeCheck 00:04:09.006 09:42:42 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:09.006 09:42:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.006 09:42:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.006 09:42:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:09.006 09:42:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.572 09:42:42 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:09.572 INFO: shutting down applications... 00:04:09.572 09:42:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:09.572 09:42:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:09.572 09:42:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:09.572 09:42:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:11.469 Calling clear_iscsi_subsystem 00:04:11.469 Calling clear_nvmf_subsystem 00:04:11.469 Calling clear_nbd_subsystem 00:04:11.469 Calling clear_ublk_subsystem 00:04:11.469 Calling clear_vhost_blk_subsystem 00:04:11.469 Calling clear_vhost_scsi_subsystem 00:04:11.469 Calling clear_bdev_subsystem 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:11.469 09:42:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:12.035 09:42:45 json_config -- json_config/json_config.sh@352 -- # break 00:04:12.035 09:42:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:12.035 09:42:45 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:12.035 09:42:45 json_config -- json_config/common.sh@31 -- # local app=target 00:04:12.035 09:42:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.035 09:42:45 json_config -- json_config/common.sh@35 -- # [[ -n 2460515 ]] 00:04:12.035 09:42:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2460515 00:04:12.035 09:42:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.035 09:42:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.036 09:42:45 json_config -- json_config/common.sh@41 -- # kill -0 2460515 00:04:12.036 09:42:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.604 09:42:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.604 09:42:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.604 09:42:45 json_config -- json_config/common.sh@41 -- # kill -0 2460515 00:04:12.604 09:42:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.604 09:42:45 json_config -- json_config/common.sh@43 -- # break 00:04:12.604 09:42:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.604 09:42:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.604 SPDK target shutdown done 00:04:12.604 09:42:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:12.604 INFO: relaunching applications... 
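The shutdown sequence traced above (`kill -SIGINT`, then up to 30 iterations of `kill -0` with a 0.5 s sleep, per `json_config/common.sh`) can be sketched as a standalone helper. This is a simplified stand-in, not SPDK's actual `json_config_test_shutdown_app`; the `wait_for_exit` name and the `sleep` stand-in process are illustrative only.

```shell
#!/usr/bin/env bash
# Hedged sketch of the shutdown-wait pattern in the trace above:
# signal the target, then poll with `kill -0` (probe without signaling)
# until the process is gone or the retry budget runs out.
wait_for_exit() {
  local pid=$1 retries=${2:-30} i
  kill -SIGINT "$pid" 2>/dev/null
  for (( i = 0; i < retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # process exited: success
    sleep 0.5
  done
  return 1                                   # still alive after ~retries/2 seconds
}

sleep 5 &                                    # hypothetical stand-in for spdk_tgt
wait_for_exit $! 30 && echo 'SPDK target shutdown done'
```

Note that `kill -0` sends no signal at all; it only checks whether the PID still exists, which is why the loop uses it as its liveness probe.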
00:04:12.604 09:42:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.604 09:42:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.604 09:42:45 json_config -- json_config/common.sh@10 -- # shift 00:04:12.604 09:42:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.604 09:42:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.604 09:42:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.604 09:42:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.604 09:42:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.604 09:42:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2462251 00:04:12.604 09:42:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.604 Waiting for target to run... 00:04:12.604 09:42:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.604 09:42:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2462251 /var/tmp/spdk_tgt.sock 00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 2462251 ']' 00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.604 09:42:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.604 [2024-11-20 09:42:45.957746] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:12.604 [2024-11-20 09:42:45.957804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462251 ] 00:04:12.863 [2024-11-20 09:42:46.253196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.863 [2024-11-20 09:42:46.288581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.144 [2024-11-20 09:42:49.316149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.144 [2024-11-20 09:42:49.348505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.144 09:42:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.144 09:42:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:16.144 09:42:49 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.144 00:04:16.144 09:42:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:16.144 09:42:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:16.144 INFO: Checking if target configuration is the same... 
00:04:16.144 09:42:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.144 09:42:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:16.144 09:42:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.144 + '[' 2 -ne 2 ']' 00:04:16.144 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.144 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:16.144 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.144 +++ basename /dev/fd/62 00:04:16.144 ++ mktemp /tmp/62.XXX 00:04:16.144 + tmp_file_1=/tmp/62.pkO 00:04:16.144 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.144 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.144 + tmp_file_2=/tmp/spdk_tgt_config.json.Q9Y 00:04:16.144 + ret=0 00:04:16.144 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.402 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.402 + diff -u /tmp/62.pkO /tmp/spdk_tgt_config.json.Q9Y 00:04:16.402 + echo 'INFO: JSON config files are the same' 00:04:16.402 INFO: JSON config files are the same 00:04:16.402 + rm /tmp/62.pkO /tmp/spdk_tgt_config.json.Q9Y 00:04:16.402 + exit 0 00:04:16.402 09:42:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:16.402 09:42:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:16.402 INFO: changing configuration and checking if this can be detected... 
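The "same / changed" verdict above comes from normalizing both JSON configs and comparing the results with `diff -u` (via `json_diff.sh` and `config_filter.py -method sort`). A minimal stand-in for that flow, assuming `python3` is available and using an inline JSON key-sort where SPDK uses `config_filter.py`:

```shell
#!/usr/bin/env bash
# Minimal sketch of the sort-then-diff comparison traced above.
# `normalize` is a hypothetical substitute for config_filter.py -method sort:
# it reprints JSON with sorted keys and stable indentation so that two
# semantically equal configs become byte-identical.
normalize() {
  python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
echo '{"b": 2, "a": 1}' | normalize > "$tmp_file_1"
echo '{"a": 1, "b": 2}' | normalize > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

This is why the test can detect a change simply by deleting `MallocBdevForConfigChangeCheck` and re-running the comparison: the normalized dumps stop matching and `diff` returns non-zero.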
00:04:16.402 09:42:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.402 09:42:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.660 09:42:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.660 09:42:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:16.660 09:42:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.660 + '[' 2 -ne 2 ']' 00:04:16.660 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.660 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:16.660 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.660 +++ basename /dev/fd/62 00:04:16.660 ++ mktemp /tmp/62.XXX 00:04:16.660 + tmp_file_1=/tmp/62.8IF 00:04:16.660 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.660 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.660 + tmp_file_2=/tmp/spdk_tgt_config.json.gW4 00:04:16.660 + ret=0 00:04:16.660 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.917 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.917 + diff -u /tmp/62.8IF /tmp/spdk_tgt_config.json.gW4 00:04:16.917 + ret=1 00:04:16.917 + echo '=== Start of file: /tmp/62.8IF ===' 00:04:16.917 + cat /tmp/62.8IF 00:04:16.917 + echo '=== End of file: /tmp/62.8IF ===' 00:04:16.917 + echo '' 00:04:16.917 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gW4 ===' 00:04:16.917 + cat /tmp/spdk_tgt_config.json.gW4 00:04:16.917 + echo '=== End of file: /tmp/spdk_tgt_config.json.gW4 ===' 00:04:16.917 + echo '' 00:04:16.917 + rm /tmp/62.8IF /tmp/spdk_tgt_config.json.gW4 00:04:16.917 + exit 1 00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:16.917 INFO: configuration change detected. 
00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:16.917 09:42:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.917 09:42:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:16.917 09:42:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 2462251 ]] 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.918 09:42:50 json_config -- json_config/json_config.sh@330 -- # killprocess 2462251 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 2462251 ']' 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@958 -- # kill -0 
2462251 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@959 -- # uname 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.918 09:42:50 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462251 00:04:17.176 09:42:50 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.176 09:42:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.176 09:42:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462251' 00:04:17.176 killing process with pid 2462251 00:04:17.176 09:42:50 json_config -- common/autotest_common.sh@973 -- # kill 2462251 00:04:17.176 09:42:50 json_config -- common/autotest_common.sh@978 -- # wait 2462251 00:04:19.075 09:42:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.075 09:42:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:19.075 09:42:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.075 09:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.075 09:42:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:19.075 09:42:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:19.075 INFO: Success 00:04:19.075 00:04:19.075 real 0m16.236s 00:04:19.075 user 0m16.879s 00:04:19.075 sys 0m2.380s 00:04:19.075 09:42:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.075 09:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.075 ************************************ 00:04:19.075 END TEST json_config 00:04:19.075 ************************************ 00:04:19.335 09:42:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.335 09:42:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.335 09:42:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.335 09:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:19.335 ************************************ 00:04:19.335 START TEST json_config_extra_key 00:04:19.335 ************************************ 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.335 --rc genhtml_branch_coverage=1 00:04:19.335 --rc genhtml_function_coverage=1 00:04:19.335 --rc genhtml_legend=1 00:04:19.335 --rc geninfo_all_blocks=1 
00:04:19.335 --rc geninfo_unexecuted_blocks=1 00:04:19.335 00:04:19.335 ' 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.335 --rc genhtml_branch_coverage=1 00:04:19.335 --rc genhtml_function_coverage=1 00:04:19.335 --rc genhtml_legend=1 00:04:19.335 --rc geninfo_all_blocks=1 00:04:19.335 --rc geninfo_unexecuted_blocks=1 00:04:19.335 00:04:19.335 ' 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.335 --rc genhtml_branch_coverage=1 00:04:19.335 --rc genhtml_function_coverage=1 00:04:19.335 --rc genhtml_legend=1 00:04:19.335 --rc geninfo_all_blocks=1 00:04:19.335 --rc geninfo_unexecuted_blocks=1 00:04:19.335 00:04:19.335 ' 00:04:19.335 09:42:52 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.335 --rc genhtml_branch_coverage=1 00:04:19.335 --rc genhtml_function_coverage=1 00:04:19.335 --rc genhtml_legend=1 00:04:19.335 --rc geninfo_all_blocks=1 00:04:19.335 --rc geninfo_unexecuted_blocks=1 00:04:19.335 00:04:19.335 ' 00:04:19.335 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
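The `lt 1.15 2` / `cmp_versions` trace above splits each dotted version on `.` and compares field by field, padding the shorter one with zeros. A hedged sketch of that comparison, under the assumption that fields are plain integers (the traced `scripts/common.sh` additionally validates each field with a `^[0-9]+$` match); the `version_lt` name is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above: return 0 (true)
# iff version $1 is strictly less than version $2.
version_lt() {
  local IFS=.                    # split "1.15" into fields on '.'
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                       # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov predates 2.x: enable branch-coverage workaround opts'
```

This explains the `LCOV_OPTS` export that follows in the trace: the extra `--rc lcov_branch_coverage=1 ...` flags are only set when the installed `lcov` is older than 2.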
00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.335 09:42:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.335 09:42:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.336 09:42:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.336 09:42:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.336 09:42:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.336 09:42:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.336 09:42:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.336 09:42:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:19.336 09:42:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:19.336 09:42:52 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.336 09:42:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:19.336 INFO: launching applications... 00:04:19.336 09:42:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2463526 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.336 Waiting for target to run... 
00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2463526 /var/tmp/spdk_tgt.sock 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2463526 ']' 00:04:19.336 09:42:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.336 09:42:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.595 [2024-11-20 09:42:52.936515] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:19.595 [2024-11-20 09:42:52.936560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463526 ] 00:04:19.853 [2024-11-20 09:42:53.237568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.853 [2024-11-20 09:42:53.273065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.422 09:42:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.422 09:42:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:20.422 00:04:20.422 09:42:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:20.422 INFO: shutting down applications... 00:04:20.422 09:42:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2463526 ]] 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2463526 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2463526 00:04:20.422 09:42:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.712 09:42:54 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2463526 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.712 09:42:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.712 SPDK target shutdown done 00:04:20.712 09:42:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:20.712 Success 00:04:20.712 00:04:20.712 real 0m1.563s 00:04:20.712 user 0m1.319s 00:04:20.712 sys 0m0.415s 00:04:20.712 09:42:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.712 09:42:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.712 ************************************ 00:04:20.712 END TEST json_config_extra_key 00:04:20.712 ************************************ 00:04:21.038 09:42:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.038 09:42:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.038 09:42:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.038 09:42:54 -- common/autotest_common.sh@10 -- # set +x 00:04:21.038 ************************************ 00:04:21.038 START TEST alias_rpc 00:04:21.038 ************************************ 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.038 * Looking for test storage... 
00:04:21.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.038 09:42:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.038 --rc genhtml_branch_coverage=1 00:04:21.038 --rc genhtml_function_coverage=1 00:04:21.038 --rc genhtml_legend=1 00:04:21.038 --rc geninfo_all_blocks=1 00:04:21.038 --rc geninfo_unexecuted_blocks=1 00:04:21.038 00:04:21.038 ' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.038 --rc genhtml_branch_coverage=1 00:04:21.038 --rc genhtml_function_coverage=1 00:04:21.038 --rc genhtml_legend=1 00:04:21.038 --rc geninfo_all_blocks=1 00:04:21.038 --rc geninfo_unexecuted_blocks=1 00:04:21.038 00:04:21.038 ' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:21.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.038 --rc genhtml_branch_coverage=1 00:04:21.038 --rc genhtml_function_coverage=1 00:04:21.038 --rc genhtml_legend=1 00:04:21.038 --rc geninfo_all_blocks=1 00:04:21.038 --rc geninfo_unexecuted_blocks=1 00:04:21.038 00:04:21.038 ' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.038 --rc genhtml_branch_coverage=1 00:04:21.038 --rc genhtml_function_coverage=1 00:04:21.038 --rc genhtml_legend=1 00:04:21.038 --rc geninfo_all_blocks=1 00:04:21.038 --rc geninfo_unexecuted_blocks=1 00:04:21.038 00:04:21.038 ' 00:04:21.038 09:42:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.038 09:42:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2463827 00:04:21.038 09:42:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2463827 00:04:21.038 09:42:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2463827 ']' 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.038 09:42:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.038 [2024-11-20 09:42:54.563300] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:21.038 [2024-11-20 09:42:54.563347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463827 ] 00:04:21.297 [2024-11-20 09:42:54.636562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.297 [2024-11-20 09:42:54.678388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.555 09:42:54 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.555 09:42:54 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:21.555 09:42:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:21.555 09:42:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2463827 00:04:21.555 09:42:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2463827 ']' 00:04:21.555 09:42:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2463827 00:04:21.555 09:42:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.555 09:42:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.555 09:42:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463827 00:04:21.813 09:42:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.813 09:42:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.813 09:42:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463827' 00:04:21.813 killing process with pid 2463827 00:04:21.813 09:42:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 2463827 00:04:21.813 09:42:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 2463827 00:04:22.072 00:04:22.072 real 0m1.135s 00:04:22.072 user 0m1.153s 00:04:22.072 sys 0m0.403s 00:04:22.072 09:42:55 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.072 09:42:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.072 ************************************ 00:04:22.072 END TEST alias_rpc 00:04:22.072 ************************************ 00:04:22.072 09:42:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:22.072 09:42:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.072 09:42:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.072 09:42:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.072 09:42:55 -- common/autotest_common.sh@10 -- # set +x 00:04:22.072 ************************************ 00:04:22.072 START TEST spdkcli_tcp 00:04:22.072 ************************************ 00:04:22.072 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.072 * Looking for test storage... 
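The spdkcli_tcp run that follows exercises SPDK's RPC layer over TCP by bridging the target's UNIX-domain socket with socat (visible in the trace as `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`). A sketch of that setup as standalone commands; the socket path and port match the trace, while the background/trap handling here is illustrative:

```shell
# Sketch of the spdkcli_tcp bridge below: socat listens on TCP 9998 and
# forwards to the target's UNIX-domain RPC socket, so rpc.py can reach
# it with -s/-p instead of the default socket path.
RPC_SOCK=/var/tmp/spdk.sock
PORT=9998

socat "TCP-LISTEN:${PORT}" "UNIX-CONNECT:${RPC_SOCK}" &
socat_pid=$!
trap 'kill "$socat_pid" 2>/dev/null' EXIT

# As in the trace: query the method list through the TCP bridge.
# scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p "$PORT" rpc_get_methods
```

This is a command fragment against a live spdk_tgt; it assumes socat is installed and the target is already listening on `/var/tmp/spdk.sock`.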
00:04:22.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:22.072 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.072 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.072 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.331 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.331 09:42:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:22.331 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.331 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.331 --rc genhtml_branch_coverage=1 00:04:22.331 --rc genhtml_function_coverage=1 00:04:22.331 --rc genhtml_legend=1 00:04:22.331 --rc geninfo_all_blocks=1 00:04:22.331 --rc geninfo_unexecuted_blocks=1 00:04:22.331 00:04:22.331 ' 00:04:22.331 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.331 --rc genhtml_branch_coverage=1 00:04:22.331 --rc genhtml_function_coverage=1 00:04:22.331 --rc genhtml_legend=1 00:04:22.331 --rc geninfo_all_blocks=1 00:04:22.331 --rc geninfo_unexecuted_blocks=1 00:04:22.331 00:04:22.331 ' 00:04:22.331 09:42:55 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.331 --rc genhtml_branch_coverage=1 00:04:22.331 --rc genhtml_function_coverage=1 00:04:22.331 --rc genhtml_legend=1 00:04:22.331 --rc geninfo_all_blocks=1 00:04:22.331 --rc geninfo_unexecuted_blocks=1 00:04:22.331 00:04:22.331 ' 00:04:22.331 09:42:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.331 --rc genhtml_branch_coverage=1 00:04:22.331 --rc genhtml_function_coverage=1 00:04:22.332 --rc genhtml_legend=1 00:04:22.332 --rc geninfo_all_blocks=1 00:04:22.332 --rc geninfo_unexecuted_blocks=1 00:04:22.332 00:04:22.332 ' 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2464118 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2464118 00:04:22.332 09:42:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2464118 ']' 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.332 09:42:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.332 [2024-11-20 09:42:55.770461] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:22.332 [2024-11-20 09:42:55.770510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464118 ] 00:04:22.332 [2024-11-20 09:42:55.845521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.332 [2024-11-20 09:42:55.886687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.332 [2024-11-20 09:42:55.886689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.265 09:42:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.265 09:42:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:23.265 09:42:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2464302 00:04:23.266 09:42:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:23.266 09:42:56 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.266 [ 00:04:23.266 "bdev_malloc_delete", 00:04:23.266 "bdev_malloc_create", 00:04:23.266 "bdev_null_resize", 00:04:23.266 "bdev_null_delete", 00:04:23.266 "bdev_null_create", 00:04:23.266 "bdev_nvme_cuse_unregister", 00:04:23.266 "bdev_nvme_cuse_register", 00:04:23.266 "bdev_opal_new_user", 00:04:23.266 "bdev_opal_set_lock_state", 00:04:23.266 "bdev_opal_delete", 00:04:23.266 "bdev_opal_get_info", 00:04:23.266 "bdev_opal_create", 00:04:23.266 "bdev_nvme_opal_revert", 00:04:23.266 "bdev_nvme_opal_init", 00:04:23.266 "bdev_nvme_send_cmd", 00:04:23.266 "bdev_nvme_set_keys", 00:04:23.266 "bdev_nvme_get_path_iostat", 00:04:23.266 "bdev_nvme_get_mdns_discovery_info", 00:04:23.266 "bdev_nvme_stop_mdns_discovery", 00:04:23.266 "bdev_nvme_start_mdns_discovery", 00:04:23.266 "bdev_nvme_set_multipath_policy", 00:04:23.266 "bdev_nvme_set_preferred_path", 00:04:23.266 "bdev_nvme_get_io_paths", 00:04:23.266 "bdev_nvme_remove_error_injection", 00:04:23.266 "bdev_nvme_add_error_injection", 00:04:23.266 "bdev_nvme_get_discovery_info", 00:04:23.266 "bdev_nvme_stop_discovery", 00:04:23.266 "bdev_nvme_start_discovery", 00:04:23.266 "bdev_nvme_get_controller_health_info", 00:04:23.266 "bdev_nvme_disable_controller", 00:04:23.266 "bdev_nvme_enable_controller", 00:04:23.266 "bdev_nvme_reset_controller", 00:04:23.266 "bdev_nvme_get_transport_statistics", 00:04:23.266 "bdev_nvme_apply_firmware", 00:04:23.266 "bdev_nvme_detach_controller", 00:04:23.266 "bdev_nvme_get_controllers", 00:04:23.266 "bdev_nvme_attach_controller", 00:04:23.266 "bdev_nvme_set_hotplug", 00:04:23.266 "bdev_nvme_set_options", 00:04:23.266 "bdev_passthru_delete", 00:04:23.266 "bdev_passthru_create", 00:04:23.266 "bdev_lvol_set_parent_bdev", 00:04:23.266 "bdev_lvol_set_parent", 00:04:23.266 "bdev_lvol_check_shallow_copy", 00:04:23.266 "bdev_lvol_start_shallow_copy", 00:04:23.266 "bdev_lvol_grow_lvstore", 00:04:23.266 
"bdev_lvol_get_lvols", 00:04:23.266 "bdev_lvol_get_lvstores", 00:04:23.266 "bdev_lvol_delete", 00:04:23.266 "bdev_lvol_set_read_only", 00:04:23.266 "bdev_lvol_resize", 00:04:23.266 "bdev_lvol_decouple_parent", 00:04:23.266 "bdev_lvol_inflate", 00:04:23.266 "bdev_lvol_rename", 00:04:23.266 "bdev_lvol_clone_bdev", 00:04:23.266 "bdev_lvol_clone", 00:04:23.266 "bdev_lvol_snapshot", 00:04:23.266 "bdev_lvol_create", 00:04:23.266 "bdev_lvol_delete_lvstore", 00:04:23.266 "bdev_lvol_rename_lvstore", 00:04:23.266 "bdev_lvol_create_lvstore", 00:04:23.266 "bdev_raid_set_options", 00:04:23.266 "bdev_raid_remove_base_bdev", 00:04:23.266 "bdev_raid_add_base_bdev", 00:04:23.266 "bdev_raid_delete", 00:04:23.266 "bdev_raid_create", 00:04:23.266 "bdev_raid_get_bdevs", 00:04:23.266 "bdev_error_inject_error", 00:04:23.266 "bdev_error_delete", 00:04:23.266 "bdev_error_create", 00:04:23.266 "bdev_split_delete", 00:04:23.266 "bdev_split_create", 00:04:23.266 "bdev_delay_delete", 00:04:23.266 "bdev_delay_create", 00:04:23.266 "bdev_delay_update_latency", 00:04:23.266 "bdev_zone_block_delete", 00:04:23.266 "bdev_zone_block_create", 00:04:23.266 "blobfs_create", 00:04:23.266 "blobfs_detect", 00:04:23.266 "blobfs_set_cache_size", 00:04:23.266 "bdev_aio_delete", 00:04:23.266 "bdev_aio_rescan", 00:04:23.266 "bdev_aio_create", 00:04:23.266 "bdev_ftl_set_property", 00:04:23.266 "bdev_ftl_get_properties", 00:04:23.266 "bdev_ftl_get_stats", 00:04:23.266 "bdev_ftl_unmap", 00:04:23.266 "bdev_ftl_unload", 00:04:23.266 "bdev_ftl_delete", 00:04:23.266 "bdev_ftl_load", 00:04:23.266 "bdev_ftl_create", 00:04:23.266 "bdev_virtio_attach_controller", 00:04:23.266 "bdev_virtio_scsi_get_devices", 00:04:23.266 "bdev_virtio_detach_controller", 00:04:23.266 "bdev_virtio_blk_set_hotplug", 00:04:23.266 "bdev_iscsi_delete", 00:04:23.266 "bdev_iscsi_create", 00:04:23.266 "bdev_iscsi_set_options", 00:04:23.266 "accel_error_inject_error", 00:04:23.266 "ioat_scan_accel_module", 00:04:23.266 "dsa_scan_accel_module", 
00:04:23.266 "iaa_scan_accel_module", 00:04:23.266 "vfu_virtio_create_fs_endpoint", 00:04:23.266 "vfu_virtio_create_scsi_endpoint", 00:04:23.266 "vfu_virtio_scsi_remove_target", 00:04:23.266 "vfu_virtio_scsi_add_target", 00:04:23.266 "vfu_virtio_create_blk_endpoint", 00:04:23.266 "vfu_virtio_delete_endpoint", 00:04:23.266 "keyring_file_remove_key", 00:04:23.266 "keyring_file_add_key", 00:04:23.266 "keyring_linux_set_options", 00:04:23.266 "fsdev_aio_delete", 00:04:23.266 "fsdev_aio_create", 00:04:23.266 "iscsi_get_histogram", 00:04:23.266 "iscsi_enable_histogram", 00:04:23.266 "iscsi_set_options", 00:04:23.266 "iscsi_get_auth_groups", 00:04:23.266 "iscsi_auth_group_remove_secret", 00:04:23.266 "iscsi_auth_group_add_secret", 00:04:23.266 "iscsi_delete_auth_group", 00:04:23.266 "iscsi_create_auth_group", 00:04:23.266 "iscsi_set_discovery_auth", 00:04:23.266 "iscsi_get_options", 00:04:23.266 "iscsi_target_node_request_logout", 00:04:23.266 "iscsi_target_node_set_redirect", 00:04:23.266 "iscsi_target_node_set_auth", 00:04:23.266 "iscsi_target_node_add_lun", 00:04:23.266 "iscsi_get_stats", 00:04:23.266 "iscsi_get_connections", 00:04:23.266 "iscsi_portal_group_set_auth", 00:04:23.266 "iscsi_start_portal_group", 00:04:23.266 "iscsi_delete_portal_group", 00:04:23.266 "iscsi_create_portal_group", 00:04:23.266 "iscsi_get_portal_groups", 00:04:23.266 "iscsi_delete_target_node", 00:04:23.266 "iscsi_target_node_remove_pg_ig_maps", 00:04:23.266 "iscsi_target_node_add_pg_ig_maps", 00:04:23.266 "iscsi_create_target_node", 00:04:23.266 "iscsi_get_target_nodes", 00:04:23.266 "iscsi_delete_initiator_group", 00:04:23.266 "iscsi_initiator_group_remove_initiators", 00:04:23.266 "iscsi_initiator_group_add_initiators", 00:04:23.266 "iscsi_create_initiator_group", 00:04:23.266 "iscsi_get_initiator_groups", 00:04:23.266 "nvmf_set_crdt", 00:04:23.266 "nvmf_set_config", 00:04:23.266 "nvmf_set_max_subsystems", 00:04:23.266 "nvmf_stop_mdns_prr", 00:04:23.266 "nvmf_publish_mdns_prr", 
00:04:23.266 "nvmf_subsystem_get_listeners", 00:04:23.266 "nvmf_subsystem_get_qpairs", 00:04:23.266 "nvmf_subsystem_get_controllers", 00:04:23.266 "nvmf_get_stats", 00:04:23.266 "nvmf_get_transports", 00:04:23.266 "nvmf_create_transport", 00:04:23.266 "nvmf_get_targets", 00:04:23.266 "nvmf_delete_target", 00:04:23.266 "nvmf_create_target", 00:04:23.266 "nvmf_subsystem_allow_any_host", 00:04:23.266 "nvmf_subsystem_set_keys", 00:04:23.266 "nvmf_subsystem_remove_host", 00:04:23.266 "nvmf_subsystem_add_host", 00:04:23.266 "nvmf_ns_remove_host", 00:04:23.266 "nvmf_ns_add_host", 00:04:23.266 "nvmf_subsystem_remove_ns", 00:04:23.266 "nvmf_subsystem_set_ns_ana_group", 00:04:23.266 "nvmf_subsystem_add_ns", 00:04:23.266 "nvmf_subsystem_listener_set_ana_state", 00:04:23.266 "nvmf_discovery_get_referrals", 00:04:23.266 "nvmf_discovery_remove_referral", 00:04:23.266 "nvmf_discovery_add_referral", 00:04:23.266 "nvmf_subsystem_remove_listener", 00:04:23.266 "nvmf_subsystem_add_listener", 00:04:23.266 "nvmf_delete_subsystem", 00:04:23.266 "nvmf_create_subsystem", 00:04:23.266 "nvmf_get_subsystems", 00:04:23.266 "env_dpdk_get_mem_stats", 00:04:23.266 "nbd_get_disks", 00:04:23.266 "nbd_stop_disk", 00:04:23.267 "nbd_start_disk", 00:04:23.267 "ublk_recover_disk", 00:04:23.267 "ublk_get_disks", 00:04:23.267 "ublk_stop_disk", 00:04:23.267 "ublk_start_disk", 00:04:23.267 "ublk_destroy_target", 00:04:23.267 "ublk_create_target", 00:04:23.267 "virtio_blk_create_transport", 00:04:23.267 "virtio_blk_get_transports", 00:04:23.267 "vhost_controller_set_coalescing", 00:04:23.267 "vhost_get_controllers", 00:04:23.267 "vhost_delete_controller", 00:04:23.267 "vhost_create_blk_controller", 00:04:23.267 "vhost_scsi_controller_remove_target", 00:04:23.267 "vhost_scsi_controller_add_target", 00:04:23.267 "vhost_start_scsi_controller", 00:04:23.267 "vhost_create_scsi_controller", 00:04:23.267 "thread_set_cpumask", 00:04:23.267 "scheduler_set_options", 00:04:23.267 "framework_get_governor", 00:04:23.267 
"framework_get_scheduler", 00:04:23.267 "framework_set_scheduler", 00:04:23.267 "framework_get_reactors", 00:04:23.267 "thread_get_io_channels", 00:04:23.267 "thread_get_pollers", 00:04:23.267 "thread_get_stats", 00:04:23.267 "framework_monitor_context_switch", 00:04:23.267 "spdk_kill_instance", 00:04:23.267 "log_enable_timestamps", 00:04:23.267 "log_get_flags", 00:04:23.267 "log_clear_flag", 00:04:23.267 "log_set_flag", 00:04:23.267 "log_get_level", 00:04:23.267 "log_set_level", 00:04:23.267 "log_get_print_level", 00:04:23.267 "log_set_print_level", 00:04:23.267 "framework_enable_cpumask_locks", 00:04:23.267 "framework_disable_cpumask_locks", 00:04:23.267 "framework_wait_init", 00:04:23.267 "framework_start_init", 00:04:23.267 "scsi_get_devices", 00:04:23.267 "bdev_get_histogram", 00:04:23.267 "bdev_enable_histogram", 00:04:23.267 "bdev_set_qos_limit", 00:04:23.267 "bdev_set_qd_sampling_period", 00:04:23.267 "bdev_get_bdevs", 00:04:23.267 "bdev_reset_iostat", 00:04:23.267 "bdev_get_iostat", 00:04:23.267 "bdev_examine", 00:04:23.267 "bdev_wait_for_examine", 00:04:23.267 "bdev_set_options", 00:04:23.267 "accel_get_stats", 00:04:23.267 "accel_set_options", 00:04:23.267 "accel_set_driver", 00:04:23.267 "accel_crypto_key_destroy", 00:04:23.267 "accel_crypto_keys_get", 00:04:23.267 "accel_crypto_key_create", 00:04:23.267 "accel_assign_opc", 00:04:23.267 "accel_get_module_info", 00:04:23.267 "accel_get_opc_assignments", 00:04:23.267 "vmd_rescan", 00:04:23.267 "vmd_remove_device", 00:04:23.267 "vmd_enable", 00:04:23.267 "sock_get_default_impl", 00:04:23.267 "sock_set_default_impl", 00:04:23.267 "sock_impl_set_options", 00:04:23.267 "sock_impl_get_options", 00:04:23.267 "iobuf_get_stats", 00:04:23.267 "iobuf_set_options", 00:04:23.267 "keyring_get_keys", 00:04:23.267 "vfu_tgt_set_base_path", 00:04:23.267 "framework_get_pci_devices", 00:04:23.267 "framework_get_config", 00:04:23.267 "framework_get_subsystems", 00:04:23.267 "fsdev_set_opts", 00:04:23.267 "fsdev_get_opts", 
00:04:23.267 "trace_get_info", 00:04:23.267 "trace_get_tpoint_group_mask", 00:04:23.267 "trace_disable_tpoint_group", 00:04:23.267 "trace_enable_tpoint_group", 00:04:23.267 "trace_clear_tpoint_mask", 00:04:23.267 "trace_set_tpoint_mask", 00:04:23.267 "notify_get_notifications", 00:04:23.267 "notify_get_types", 00:04:23.267 "spdk_get_version", 00:04:23.267 "rpc_get_methods" 00:04:23.267 ] 00:04:23.267 09:42:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:23.267 09:42:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.267 09:42:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.525 09:42:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.525 09:42:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2464118 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2464118 ']' 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2464118 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464118 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464118' 00:04:23.525 killing process with pid 2464118 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2464118 00:04:23.525 09:42:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2464118 00:04:23.783 00:04:23.783 real 0m1.660s 00:04:23.783 user 0m3.102s 00:04:23.783 sys 0m0.485s 00:04:23.783 09:42:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.783 09:42:57 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.783 ************************************ 00:04:23.783 END TEST spdkcli_tcp 00:04:23.783 ************************************ 00:04:23.783 09:42:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.783 09:42:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.783 09:42:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.783 09:42:57 -- common/autotest_common.sh@10 -- # set +x 00:04:23.783 ************************************ 00:04:23.783 START TEST dpdk_mem_utility 00:04:23.783 ************************************ 00:04:23.783 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.783 * Looking for test storage... 00:04:23.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:23.783 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.783 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.783 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.041 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.041 09:42:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.042 09:42:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:24.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.042 --rc genhtml_branch_coverage=1 00:04:24.042 --rc genhtml_function_coverage=1 00:04:24.042 --rc genhtml_legend=1 00:04:24.042 --rc geninfo_all_blocks=1 00:04:24.042 --rc geninfo_unexecuted_blocks=1 00:04:24.042 00:04:24.042 ' 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.042 --rc genhtml_branch_coverage=1 00:04:24.042 --rc genhtml_function_coverage=1 00:04:24.042 --rc genhtml_legend=1 00:04:24.042 --rc geninfo_all_blocks=1 00:04:24.042 --rc geninfo_unexecuted_blocks=1 00:04:24.042 00:04:24.042 ' 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.042 --rc genhtml_branch_coverage=1 00:04:24.042 --rc genhtml_function_coverage=1 00:04:24.042 --rc genhtml_legend=1 00:04:24.042 --rc geninfo_all_blocks=1 00:04:24.042 --rc geninfo_unexecuted_blocks=1 00:04:24.042 00:04:24.042 ' 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.042 --rc genhtml_branch_coverage=1 00:04:24.042 --rc genhtml_function_coverage=1 00:04:24.042 --rc genhtml_legend=1 00:04:24.042 --rc geninfo_all_blocks=1 00:04:24.042 --rc geninfo_unexecuted_blocks=1 00:04:24.042 00:04:24.042 ' 00:04:24.042 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.042 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2464433 00:04:24.042 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2464433 00:04:24.042 09:42:57 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2464433 ']' 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.042 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.042 [2024-11-20 09:42:57.493922] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:24.042 [2024-11-20 09:42:57.493972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464433 ] 00:04:24.042 [2024-11-20 09:42:57.564444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.042 [2024-11-20 09:42:57.603902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.300 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.300 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:24.300 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:24.300 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:24.300 09:42:57 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.300 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.300 { 00:04:24.300 "filename": "/tmp/spdk_mem_dump.txt" 00:04:24.300 } 00:04:24.300 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.300 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.300 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:24.300 1 heaps totaling size 810.000000 MiB 00:04:24.300 size: 810.000000 MiB heap id: 0 00:04:24.300 end heaps---------- 00:04:24.300 9 mempools totaling size 595.772034 MiB 00:04:24.301 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:24.301 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:24.301 size: 92.545471 MiB name: bdev_io_2464433 00:04:24.301 size: 50.003479 MiB name: msgpool_2464433 00:04:24.301 size: 36.509338 MiB name: fsdev_io_2464433 00:04:24.301 size: 21.763794 MiB name: PDU_Pool 00:04:24.301 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:24.301 size: 4.133484 MiB name: evtpool_2464433 00:04:24.301 size: 0.026123 MiB name: Session_Pool 00:04:24.301 end mempools------- 00:04:24.301 6 memzones totaling size 4.142822 MiB 00:04:24.301 size: 1.000366 MiB name: RG_ring_0_2464433 00:04:24.301 size: 1.000366 MiB name: RG_ring_1_2464433 00:04:24.301 size: 1.000366 MiB name: RG_ring_4_2464433 00:04:24.301 size: 1.000366 MiB name: RG_ring_5_2464433 00:04:24.301 size: 0.125366 MiB name: RG_ring_2_2464433 00:04:24.301 size: 0.015991 MiB name: RG_ring_3_2464433 00:04:24.301 end memzones------- 00:04:24.301 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:24.560 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:24.560 list of free elements. 
size: 10.862488 MiB 00:04:24.560 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:24.560 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:24.560 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:24.560 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:24.560 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:24.560 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:24.560 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:24.560 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:24.560 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:24.560 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:24.560 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:24.560 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:24.560 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:24.560 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:24.560 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:24.560 list of standard malloc elements. 
size: 199.218628 MiB 00:04:24.560 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:24.560 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:24.560 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:24.560 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:24.560 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:24.560 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.560 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:24.560 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.560 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:24.560 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:24.560 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:24.560 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:24.560 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:24.560 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:24.560 list of memzone associated elements. 
size: 599.918884 MiB 00:04:24.560 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:24.561 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.561 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:24.561 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.561 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:24.561 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2464433_0 00:04:24.561 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:24.561 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2464433_0 00:04:24.561 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:24.561 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2464433_0 00:04:24.561 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:24.561 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.561 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:24.561 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.561 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:24.561 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2464433_0 00:04:24.561 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:24.561 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2464433 00:04:24.561 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.561 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2464433 00:04:24.561 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:24.561 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.561 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:24.561 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.561 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:24.561 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.561 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:24.561 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.561 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:24.561 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2464433 00:04:24.561 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:24.561 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2464433 00:04:24.561 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:24.561 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2464433 00:04:24.561 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:24.561 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2464433 00:04:24.561 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:24.561 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2464433 00:04:24.561 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:24.561 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2464433 00:04:24.561 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:24.561 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.561 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:24.561 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.561 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:24.561 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.561 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:24.561 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2464433 00:04:24.561 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:24.561 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2464433 00:04:24.561 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:24.561 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.561 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:24.561 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.561 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:24.561 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2464433 00:04:24.561 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:24.561 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.561 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:24.561 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2464433 00:04:24.561 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:24.561 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2464433 00:04:24.561 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:24.561 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2464433 00:04:24.561 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:24.561 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.561 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.561 09:42:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2464433 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2464433 ']' 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2464433 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464433 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.561 09:42:57 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464433' 00:04:24.561 killing process with pid 2464433 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2464433 00:04:24.561 09:42:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2464433 00:04:24.820 00:04:24.820 real 0m1.013s 00:04:24.820 user 0m0.961s 00:04:24.820 sys 0m0.400s 00:04:24.820 09:42:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.820 09:42:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.820 ************************************ 00:04:24.820 END TEST dpdk_mem_utility 00:04:24.820 ************************************ 00:04:24.820 09:42:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:24.820 09:42:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.820 09:42:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.820 09:42:58 -- common/autotest_common.sh@10 -- # set +x 00:04:24.820 ************************************ 00:04:24.820 START TEST event 00:04:24.820 ************************************ 00:04:24.820 09:42:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.078 * Looking for test storage... 
00:04:25.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.078 09:42:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.078 09:42:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.078 09:42:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.078 09:42:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.078 09:42:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.078 09:42:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.078 09:42:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.078 09:42:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.078 09:42:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.078 09:42:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.078 09:42:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.078 09:42:58 event -- scripts/common.sh@344 -- # case "$op" in 00:04:25.078 09:42:58 event -- scripts/common.sh@345 -- # : 1 00:04:25.078 09:42:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.078 09:42:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.078 09:42:58 event -- scripts/common.sh@365 -- # decimal 1 00:04:25.078 09:42:58 event -- scripts/common.sh@353 -- # local d=1 00:04:25.078 09:42:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.078 09:42:58 event -- scripts/common.sh@355 -- # echo 1 00:04:25.078 09:42:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.078 09:42:58 event -- scripts/common.sh@366 -- # decimal 2 00:04:25.078 09:42:58 event -- scripts/common.sh@353 -- # local d=2 00:04:25.078 09:42:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.078 09:42:58 event -- scripts/common.sh@355 -- # echo 2 00:04:25.078 09:42:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.078 09:42:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.078 09:42:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.078 09:42:58 event -- scripts/common.sh@368 -- # return 0 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.078 --rc genhtml_branch_coverage=1 00:04:25.078 --rc genhtml_function_coverage=1 00:04:25.078 --rc genhtml_legend=1 00:04:25.078 --rc geninfo_all_blocks=1 00:04:25.078 --rc geninfo_unexecuted_blocks=1 00:04:25.078 00:04:25.078 ' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.078 --rc genhtml_branch_coverage=1 00:04:25.078 --rc genhtml_function_coverage=1 00:04:25.078 --rc genhtml_legend=1 00:04:25.078 --rc geninfo_all_blocks=1 00:04:25.078 --rc geninfo_unexecuted_blocks=1 00:04:25.078 00:04:25.078 ' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.078 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:25.078 --rc genhtml_branch_coverage=1 00:04:25.078 --rc genhtml_function_coverage=1 00:04:25.078 --rc genhtml_legend=1 00:04:25.078 --rc geninfo_all_blocks=1 00:04:25.078 --rc geninfo_unexecuted_blocks=1 00:04:25.078 00:04:25.078 ' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.078 --rc genhtml_branch_coverage=1 00:04:25.078 --rc genhtml_function_coverage=1 00:04:25.078 --rc genhtml_legend=1 00:04:25.078 --rc geninfo_all_blocks=1 00:04:25.078 --rc geninfo_unexecuted_blocks=1 00:04:25.078 00:04:25.078 ' 00:04:25.078 09:42:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.078 09:42:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.078 09:42:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:25.078 09:42:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.078 09:42:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.078 ************************************ 00:04:25.078 START TEST event_perf 00:04:25.078 ************************************ 00:04:25.078 09:42:58 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.078 Running I/O for 1 seconds...[2024-11-20 09:42:58.571885] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:25.078 [2024-11-20 09:42:58.571946] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464723 ] 00:04:25.078 [2024-11-20 09:42:58.649999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.336 [2024-11-20 09:42:58.694055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.336 [2024-11-20 09:42:58.694165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.336 [2024-11-20 09:42:58.694271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.336 [2024-11-20 09:42:58.694271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.267 Running I/O for 1 seconds... 00:04:26.267 lcore 0: 208003 00:04:26.267 lcore 1: 208003 00:04:26.268 lcore 2: 208004 00:04:26.268 lcore 3: 208004 00:04:26.268 done. 
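Editor's note: the `event_perf` run above prints one `lcore N: COUNT` line per reactor core before `done.`. As an illustrative sketch (not part of the SPDK test suite — the regex and helper name are assumptions based solely on the output format visible in this log), those counters can be scraped from a captured log like so:

```python
import re

# Matches the "lcore N: COUNT" lines emitted by SPDK's event_perf test,
# as seen in the log output above. Hypothetical helper, not an SPDK API.
LCORE_RE = re.compile(r"lcore\s+(\d+):\s+(\d+)")

def parse_event_perf(output: str) -> dict:
    """Return {lcore_id: events_processed} parsed from event_perf output."""
    return {int(m.group(1)): int(m.group(2))
            for m in LCORE_RE.finditer(output)}

# Sample taken verbatim from the run above.
sample = """Running I/O for 1 seconds...
lcore 0: 208003
lcore 1: 208003
lcore 2: 208004
lcore 3: 208004
done."""

counts = parse_event_perf(sample)
print(counts)                 # per-core event counts
print(sum(counts.values()))   # aggregate events across all reactors
```

A parser like this is handy for comparing per-core balance across CI runs; the near-identical counts above indicate the round-robin dispatch across the 0xF core mask is even.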
00:04:26.268 00:04:26.268 real 0m1.183s 00:04:26.268 user 0m4.102s 00:04:26.268 sys 0m0.077s 00:04:26.268 09:42:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.268 09:42:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.268 ************************************ 00:04:26.268 END TEST event_perf 00:04:26.268 ************************************ 00:04:26.268 09:42:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.268 09:42:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:26.268 09:42:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.268 09:42:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.268 ************************************ 00:04:26.268 START TEST event_reactor 00:04:26.268 ************************************ 00:04:26.268 09:42:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.268 [2024-11-20 09:42:59.821532] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:26.268 [2024-11-20 09:42:59.821601] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464975 ] 00:04:26.526 [2024-11-20 09:42:59.897906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.526 [2024-11-20 09:42:59.938528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.460 test_start 00:04:27.460 oneshot 00:04:27.460 tick 100 00:04:27.460 tick 100 00:04:27.460 tick 250 00:04:27.460 tick 100 00:04:27.460 tick 100 00:04:27.460 tick 100 00:04:27.460 tick 250 00:04:27.460 tick 500 00:04:27.460 tick 100 00:04:27.460 tick 100 00:04:27.460 tick 250 00:04:27.460 tick 100 00:04:27.460 tick 100 00:04:27.460 test_end 00:04:27.460 00:04:27.460 real 0m1.177s 00:04:27.460 user 0m1.095s 00:04:27.460 sys 0m0.078s 00:04:27.460 09:43:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.460 09:43:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:27.460 ************************************ 00:04:27.460 END TEST event_reactor 00:04:27.460 ************************************ 00:04:27.460 09:43:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:27.460 09:43:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:27.460 09:43:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.460 09:43:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.718 ************************************ 00:04:27.718 START TEST event_reactor_perf 00:04:27.718 ************************************ 00:04:27.718 09:43:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:27.718 [2024-11-20 09:43:01.070391] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:27.718 [2024-11-20 09:43:01.070462] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465221 ] 00:04:27.718 [2024-11-20 09:43:01.150314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.718 [2024-11-20 09:43:01.189519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.652 test_start 00:04:28.652 test_end 00:04:28.652 Performance: 518432 events per second 00:04:28.652 00:04:28.652 real 0m1.179s 00:04:28.652 user 0m1.094s 00:04:28.652 sys 0m0.081s 00:04:28.652 09:43:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.652 09:43:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.652 ************************************ 00:04:28.652 END TEST event_reactor_perf 00:04:28.652 ************************************ 00:04:28.911 09:43:02 event -- event/event.sh@49 -- # uname -s 00:04:28.911 09:43:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:28.911 09:43:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:28.911 09:43:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.911 09:43:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.911 09:43:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.911 ************************************ 00:04:28.911 START TEST event_scheduler 00:04:28.911 ************************************ 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:28.911 * Looking for test storage... 00:04:28.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.911 09:43:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.911 --rc genhtml_branch_coverage=1 00:04:28.911 --rc genhtml_function_coverage=1 00:04:28.911 --rc genhtml_legend=1 00:04:28.911 --rc geninfo_all_blocks=1 00:04:28.911 --rc geninfo_unexecuted_blocks=1 00:04:28.911 00:04:28.911 ' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.911 --rc genhtml_branch_coverage=1 00:04:28.911 --rc genhtml_function_coverage=1 00:04:28.911 --rc 
genhtml_legend=1 00:04:28.911 --rc geninfo_all_blocks=1 00:04:28.911 --rc geninfo_unexecuted_blocks=1 00:04:28.911 00:04:28.911 ' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.911 --rc genhtml_branch_coverage=1 00:04:28.911 --rc genhtml_function_coverage=1 00:04:28.911 --rc genhtml_legend=1 00:04:28.911 --rc geninfo_all_blocks=1 00:04:28.911 --rc geninfo_unexecuted_blocks=1 00:04:28.911 00:04:28.911 ' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.911 --rc genhtml_branch_coverage=1 00:04:28.911 --rc genhtml_function_coverage=1 00:04:28.911 --rc genhtml_legend=1 00:04:28.911 --rc geninfo_all_blocks=1 00:04:28.911 --rc geninfo_unexecuted_blocks=1 00:04:28.911 00:04:28.911 ' 00:04:28.911 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:28.911 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2465514 00:04:28.911 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.911 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:28.911 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2465514 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2465514 ']' 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.911 09:43:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.171 [2024-11-20 09:43:02.518207] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:29.171 [2024-11-20 09:43:02.518254] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465514 ] 00:04:29.171 [2024-11-20 09:43:02.591848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.171 [2024-11-20 09:43:02.634506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.171 [2024-11-20 09:43:02.634613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.171 [2024-11-20 09:43:02.634717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.171 [2024-11-20 09:43:02.634718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:29.171 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.171 [2024-11-20 09:43:02.679318] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:29.171 [2024-11-20 09:43:02.679337] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:29.171 [2024-11-20 09:43:02.679347] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:29.171 [2024-11-20 09:43:02.679353] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:29.171 [2024-11-20 09:43:02.679358] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.171 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.171 09:43:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 [2024-11-20 09:43:02.757041] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:29.430 09:43:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.430 09:43:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.430 09:43:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 ************************************ 00:04:29.430 START TEST scheduler_create_thread 00:04:29.430 ************************************ 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 2 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 3 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 4 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 5 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 6 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 7 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 8 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 9 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 10 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.430 09:43:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.997 09:43:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.997 09:43:03 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:29.997 09:43:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.997 09:43:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.372 09:43:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.372 09:43:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.372 09:43:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.372 09:43:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.372 09:43:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.747 09:43:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.747 00:04:32.747 real 0m3.100s 00:04:32.747 user 0m0.024s 00:04:32.747 sys 0m0.006s 00:04:32.747 09:43:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.747 09:43:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.747 ************************************ 00:04:32.747 END TEST scheduler_create_thread 00:04:32.747 ************************************ 00:04:32.747 09:43:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:32.747 09:43:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2465514 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2465514 ']' 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2465514 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465514 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465514' 00:04:32.747 killing process with pid 2465514 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2465514 00:04:32.747 09:43:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2465514 00:04:32.747 [2024-11-20 09:43:06.276163] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:33.006 00:04:33.006 real 0m4.157s 00:04:33.006 user 0m6.639s 00:04:33.006 sys 0m0.383s 00:04:33.006 09:43:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.006 09:43:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.006 ************************************ 00:04:33.006 END TEST event_scheduler 00:04:33.006 ************************************ 00:04:33.006 09:43:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:33.006 09:43:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:33.006 09:43:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.006 09:43:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.006 09:43:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.006 ************************************ 00:04:33.006 START TEST app_repeat 00:04:33.006 ************************************ 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2466252 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2466252' 00:04:33.006 Process app_repeat pid: 2466252 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.006 spdk_app_start Round 0 00:04:33.006 09:43:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2466252 /var/tmp/spdk-nbd.sock 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2466252 ']' 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.006 09:43:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.006 [2024-11-20 09:43:06.575646] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:33.006 [2024-11-20 09:43:06.575696] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466252 ] 00:04:33.265 [2024-11-20 09:43:06.652218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.265 [2024-11-20 09:43:06.692637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.265 [2024-11-20 09:43:06.692637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.265 09:43:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.265 09:43:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:33.265 09:43:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.523 Malloc0 00:04:33.523 09:43:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.781 Malloc1 00:04:33.781 09:43:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.781 
09:43:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.781 09:43:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.040 /dev/nbd0 00:04:34.040 09:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.040 09:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:34.040 1+0 records in 00:04:34.040 1+0 records out 00:04:34.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225406 s, 18.2 MB/s 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.040 09:43:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.040 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.040 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.040 09:43:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.299 /dev/nbd1 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.299 09:43:07 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.299 1+0 records in 00:04:34.299 1+0 records out 00:04:34.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232123 s, 17.6 MB/s 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.299 09:43:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.299 09:43:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.558 { 00:04:34.558 "nbd_device": "/dev/nbd0", 00:04:34.558 "bdev_name": "Malloc0" 00:04:34.558 }, 00:04:34.558 { 00:04:34.558 "nbd_device": "/dev/nbd1", 00:04:34.558 "bdev_name": "Malloc1" 00:04:34.558 } 00:04:34.558 ]' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.558 { 00:04:34.558 "nbd_device": "/dev/nbd0", 00:04:34.558 "bdev_name": "Malloc0" 00:04:34.558 
}, 00:04:34.558 { 00:04:34.558 "nbd_device": "/dev/nbd1", 00:04:34.558 "bdev_name": "Malloc1" 00:04:34.558 } 00:04:34.558 ]' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.558 /dev/nbd1' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.558 /dev/nbd1' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.558 256+0 records in 00:04:34.558 256+0 records out 00:04:34.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100692 s, 104 MB/s 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.558 256+0 records in 00:04:34.558 256+0 records out 00:04:34.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138338 s, 75.8 MB/s 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.558 256+0 records in 00:04:34.558 256+0 records out 00:04:34.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146933 s, 71.4 MB/s 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.558 09:43:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.558 09:43:07 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.558 09:43:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.817 09:43:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.076 09:43:08 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.076 09:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.335 09:43:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.335 09:43:08 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.335 09:43:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.593 [2024-11-20 09:43:09.048795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.593 [2024-11-20 09:43:09.085694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.593 [2024-11-20 09:43:09.085695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.593 [2024-11-20 09:43:09.126004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.593 [2024-11-20 09:43:09.126061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.874 09:43:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.874 09:43:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.874 spdk_app_start Round 1 00:04:38.874 09:43:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2466252 /var/tmp/spdk-nbd.sock 00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2466252 ']' 00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.874 09:43:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 09:43:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.874 09:43:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.874 09:43:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.874 Malloc0 00:04:38.874 09:43:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.133 Malloc1 00:04:39.133 09:43:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.133 09:43:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.390 /dev/nbd0 00:04:39.390 09:43:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.390 09:43:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.390 1+0 records in 00:04:39.390 1+0 records out 00:04:39.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179116 s, 22.9 MB/s 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.390 09:43:12 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.390 09:43:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.390 09:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.390 09:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.390 09:43:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.647 /dev/nbd1 00:04:39.647 09:43:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.647 09:43:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.647 09:43:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.648 1+0 records in 00:04:39.648 1+0 records out 00:04:39.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232469 s, 17.6 MB/s 00:04:39.648 09:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.648 09:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.648 09:43:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.648 09:43:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.648 09:43:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.648 09:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.648 09:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.648 09:43:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.648 09:43:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.648 09:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.905 { 00:04:39.905 "nbd_device": "/dev/nbd0", 00:04:39.905 "bdev_name": "Malloc0" 00:04:39.905 }, 00:04:39.905 { 00:04:39.905 "nbd_device": "/dev/nbd1", 00:04:39.905 "bdev_name": "Malloc1" 00:04:39.905 } 00:04:39.905 ]' 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.905 { 00:04:39.905 "nbd_device": "/dev/nbd0", 00:04:39.905 "bdev_name": "Malloc0" 00:04:39.905 }, 00:04:39.905 { 00:04:39.905 "nbd_device": "/dev/nbd1", 00:04:39.905 "bdev_name": "Malloc1" 00:04:39.905 } 00:04:39.905 ]' 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.905 /dev/nbd1' 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.905 /dev/nbd1' 00:04:39.905 
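The `grep -c /dev/nbd` followed by `true` in the `nbd_get_count` trace is a count-that-may-be-zero idiom: `grep -c` prints the match count but exits nonzero when that count is 0, which would abort a `set -e` script, so the failure is discarded after the count is captured. A minimal sketch with the input inlined (the `count_nbd` wrapper name is illustrative, not from the script):

```shell
#!/usr/bin/env bash
set -e
# grep -c still prints "0" on zero matches; only its exit status is nonzero,
# so '|| true' keeps an empty device list from killing the script.
count_nbd() {
    local names=$1 count
    count=$(echo "$names" | grep -c /dev/nbd) || true
    echo "$count"
}
```

This is why the trace shows `count=0` immediately followed by a bare `true`.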
09:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.905 09:43:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.906 256+0 records in 00:04:39.906 256+0 records out 00:04:39.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010062 s, 104 MB/s 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.906 256+0 records in 00:04:39.906 256+0 records out 00:04:39.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137919 s, 76.0 MB/s 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.906 256+0 records in 00:04:39.906 256+0 records out 00:04:39.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014662 s, 71.5 MB/s 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.906 09:43:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.163 09:43:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.421 09:43:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.421 09:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.679 09:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.680 09:43:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.680 09:43:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.680 09:43:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.938 [2024-11-20 09:43:14.366678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.938 [2024-11-20 09:43:14.404582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.938 [2024-11-20 09:43:14.404584] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.938 [2024-11-20 09:43:14.445995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.938 [2024-11-20 09:43:14.446036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.218 09:43:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.218 09:43:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.218 spdk_app_start Round 2 00:04:44.218 09:43:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2466252 /var/tmp/spdk-nbd.sock 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2466252 ']' 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
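The `nbd_dd_data_verify` phase in each round writes 256 x 4 KiB of urandom data through every `/dev/nbdX` and byte-compares it back with `cmp -b -n 1M`. A file-backed sketch of the same write/verify shape (temp files stand in for the nbd devices, and `oflag=direct` is dropped so the sketch runs on any filesystem):

```shell
#!/usr/bin/env bash
set -e
# Write phase: fill a reference file with random data, copy it to each
# "device"; verify phase: compare the first 1M of each device against the
# reference, as nbd_common.sh does with cmp.
workdir=$(mktemp -d)
ref=$workdir/nbdrandtest
devices=("$workdir/nbd0" "$workdir/nbd1")   # stand-ins for /dev/nbd0 /dev/nbd1

dd if=/dev/urandom of="$ref" bs=4096 count=256 status=none
for dev in "${devices[@]}"; do
    dd if="$ref" of="$dev" bs=4096 count=256 status=none
done
for dev in "${devices[@]}"; do
    cmp -n 1M "$ref" "$dev"   # same 1M byte limit as the trace's cmp -b -n 1M
done
rm -r "$workdir"
verify_status=ok
```

Because every copy goes through the nbd device in the real test, a successful `cmp` demonstrates that the Malloc bdev round-tripped the data intact.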
00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.218 09:43:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.218 09:43:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.218 Malloc0 00:04:44.218 09:43:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.476 Malloc1 00:04:44.476 09:43:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.476 09:43:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.476 /dev/nbd0 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.734 1+0 records in 00:04:44.734 1+0 records out 00:04:44.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235864 s, 17.4 MB/s 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.734 09:43:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.734 /dev/nbd1 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.734 09:43:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.734 1+0 records in 00:04:44.734 1+0 records out 00:04:44.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231893 s, 17.7 MB/s 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.734 09:43:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.992 09:43:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.992 09:43:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.992 { 00:04:44.992 "nbd_device": "/dev/nbd0", 00:04:44.992 "bdev_name": "Malloc0" 00:04:44.992 }, 00:04:44.992 { 00:04:44.992 "nbd_device": "/dev/nbd1", 00:04:44.992 "bdev_name": "Malloc1" 00:04:44.992 } 00:04:44.992 ]' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.992 { 00:04:44.992 "nbd_device": "/dev/nbd0", 00:04:44.992 "bdev_name": "Malloc0" 00:04:44.992 }, 00:04:44.992 { 00:04:44.992 "nbd_device": "/dev/nbd1", 00:04:44.992 "bdev_name": "Malloc1" 00:04:44.992 } 00:04:44.992 ]' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.992 /dev/nbd1' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.992 /dev/nbd1' 00:04:44.992 
09:43:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.992 09:43:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.249 256+0 records in 00:04:45.249 256+0 records out 00:04:45.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106372 s, 98.6 MB/s 00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.249 256+0 records in 00:04:45.249 256+0 records out 00:04:45.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129483 s, 81.0 MB/s 00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:45.249 256+0 records in
00:04:45.249 256+0 records out
00:04:45.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143427 s, 73.1 MB/s
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:45.249 09:43:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:45.506 09:43:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:45.507 09:43:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:45.507 09:43:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.507 09:43:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:45.764 09:43:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:45.764 09:43:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:46.022 09:43:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:46.280 [2024-11-20 09:43:19.654049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:46.280 [2024-11-20 09:43:19.690452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:46.280 [2024-11-20 09:43:19.690453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.280 [2024-11-20 09:43:19.730424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:46.280 [2024-11-20 09:43:19.730464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:49.563 09:43:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2466252 /var/tmp/spdk-nbd.sock
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2466252 ']'
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
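The nbd_dd_data_verify trace above follows a write-then-verify pattern: dd copies a random test file onto each exported /dev/nbd* device, then cmp -b -n 1M checks every device byte-for-byte against the same file. A minimal sketch of that pattern, using plain temporary files in place of the /dev/nbd0 and /dev/nbd1 block devices (an assumption so the sketch runs without an NBD setup; oflag=direct from the trace is dropped because it needs a real block device):

```shell
# Sketch only: mktemp targets stand in for the real /dev/nbd* devices.
tmp_file=$(mktemp)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none   # 1 MiB of test data

nbd_list=("$(mktemp)" "$(mktemp)")
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 conv=notrunc status=none
done

verified=0
for dev in "${nbd_list[@]}"; do
    # -b reports differing bytes; -n 1M limits the compare to the copied size
    cmp -b -n 1M "$tmp_file" "$dev" && verified=$((verified + 1))
done
echo "devices verified: $verified"
rm -f "$tmp_file" "${nbd_list[@]}"
```

In the real helper a cmp mismatch makes the function exit nonzero, which is what fails the test run.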
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:49.563 09:43:22 event.app_repeat -- event/event.sh@39 -- # killprocess 2466252
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2466252 ']'
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2466252
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466252
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466252'
killing process with pid 2466252
09:43:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2466252
09:43:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2466252
00:04:49.563 spdk_app_start is called in Round 0.
00:04:49.563 Shutdown signal received, stop current app iteration
00:04:49.563 Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 reinitialization...
00:04:49.563 spdk_app_start is called in Round 1.
00:04:49.563 Shutdown signal received, stop current app iteration
00:04:49.563 Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 reinitialization...
00:04:49.563 spdk_app_start is called in Round 2.
00:04:49.563 Shutdown signal received, stop current app iteration
00:04:49.563 Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 reinitialization...
00:04:49.563 spdk_app_start is called in Round 3.
00:04:49.563 Shutdown signal received, stop current app iteration
00:04:49.563 09:43:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:49.563 09:43:22 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:49.563
00:04:49.563 real 0m16.377s
00:04:49.563 user 0m36.004s
00:04:49.563 sys 0m2.525s
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:49.563 09:43:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:49.563 ************************************
00:04:49.563 END TEST app_repeat
00:04:49.563 ************************************
00:04:49.563 09:43:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:49.563 09:43:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:49.563 09:43:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:49.563 09:43:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.563 09:43:22 event -- common/autotest_common.sh@10 -- # set +x
00:04:49.563 ************************************
00:04:49.563 START TEST cpu_locks
00:04:49.563 ************************************
00:04:49.563 09:43:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:49.563 * Looking for test storage...
00:04:49.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:49.563 09:43:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:49.563 09:43:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:04:49.563 09:43:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:49.563 09:43:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:49.563 09:43:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:49.563 09:43:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:49.822 09:43:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:49.823 09:43:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:49.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:49.823 --rc genhtml_branch_coverage=1
00:04:49.823 --rc genhtml_function_coverage=1
00:04:49.823 --rc genhtml_legend=1
00:04:49.823 --rc geninfo_all_blocks=1
00:04:49.823 --rc geninfo_unexecuted_blocks=1
00:04:49.823
00:04:49.823 '
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:49.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:49.823 --rc genhtml_branch_coverage=1
00:04:49.823 --rc genhtml_function_coverage=1
00:04:49.823 --rc genhtml_legend=1
00:04:49.823 --rc geninfo_all_blocks=1
00:04:49.823 --rc geninfo_unexecuted_blocks=1
00:04:49.823
00:04:49.823 '
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:49.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:49.823 --rc genhtml_branch_coverage=1
00:04:49.823 --rc genhtml_function_coverage=1
00:04:49.823 --rc genhtml_legend=1
00:04:49.823 --rc geninfo_all_blocks=1
00:04:49.823 --rc geninfo_unexecuted_blocks=1
00:04:49.823
00:04:49.823 '
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:49.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:49.823 --rc genhtml_branch_coverage=1
00:04:49.823 --rc genhtml_function_coverage=1
00:04:49.823 --rc genhtml_legend=1
00:04:49.823 --rc geninfo_all_blocks=1
00:04:49.823 --rc geninfo_unexecuted_blocks=1
00:04:49.823
00:04:49.823 '
00:04:49.823 09:43:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:49.823 09:43:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:49.823 09:43:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:49.823 09:43:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.823 09:43:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:49.823 ************************************
00:04:49.823 START TEST default_locks
00:04:49.823 ************************************
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2469251
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2469251
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2469251 ']'
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:49.823 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:49.823 [2024-11-20 09:43:23.241831] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
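The cmp_versions trace above (entered via `lt 1.15 2`) splits both version strings on `.`/`-`/`:` into arrays, treats missing fields as zero, and compares field by field until one side wins. A hypothetical, condensed re-implementation of that field-wise compare (the helper name `ver_lt` is my own, not the one in scripts/common.sh):

```shell
# ver_lt A B -> exit 0 iff version A sorts strictly before version B.
ver_lt() {
    local IFS='.-'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        # missing fields count as 0, mirroring the zero-padding in the trace;
        # 10# forces base-10 so leading zeros are not read as octal
        ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0
        ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1
    done
    return 1  # equal versions: not strictly less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the trace shows `ver1_l=2`, `ver2_l=1` and then a single-field compare of 1 vs 2 deciding the whole test.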
00:04:49.823 [2024-11-20 09:43:23.241874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469251 ]
00:04:49.823 [2024-11-20 09:43:23.314232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.823 [2024-11-20 09:43:23.353646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.081 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.081 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:04:50.081 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2469251
00:04:50.081 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2469251
00:04:50.081 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:50.647 lslocks: write error
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2469251
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2469251 ']'
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2469251
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:50.648 09:43:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469251
00:04:50.648 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:50.648 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:50.648 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469251'
killing process with pid 2469251
09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2469251
09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2469251
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2469251
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2469251
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2469251
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2469251 ']'
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2469251) - No such process
00:04:50.907 ERROR: process (pid: 2469251) is no longer running
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:50.907
00:04:50.907 real 0m1.125s
00:04:50.907 user 0m1.082s
00:04:50.907 sys 0m0.510s
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.907 09:43:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.907 ************************************
00:04:50.907 END TEST default_locks
00:04:50.907 ************************************
00:04:50.907 09:43:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:50.907 09:43:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.907 09:43:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.907 09:43:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.907 ************************************
00:04:50.907 START TEST default_locks_via_rpc
00:04:50.907 ************************************
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2469510
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2469510
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2469510 ']'
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.907 09:43:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.907 [2024-11-20 09:43:24.437335] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
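killprocess, traced repeatedly above, guards the kill with three checks: the pid must still be alive (`kill -0`), on Linux its command name is read back with `ps --no-headers -o comm=`, and a process named `sudo` is never signalled. A simplified sketch of that guard (the function name and the `sleep` target are illustrative, not from autotest_common.sh):

```shell
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                   # the '[' -z ... ']' check from the trace
    kill -0 "$pid" 2>/dev/null || return 1      # pid must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the log
        [ "$process_name" = sudo ] && return 1  # never SIGTERM a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
killprocess_sketch $!
```

The `wait` after the kill is what lets the harness reap the child and report its exit status, matching the `# kill` / `# wait` pair in the trace.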
00:04:50.907 [2024-11-20 09:43:24.437379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469510 ]
00:04:51.166 [2024-11-20 09:43:24.511356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.166 [2024-11-20 09:43:24.550491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2469510
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2469510
00:04:51.731 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2469510
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2469510 ']'
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2469510
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469510
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:52.297 09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469510'
killing process with pid 2469510
09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2469510
09:43:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2469510
00:04:52.864
00:04:52.864 real 0m1.788s
00:04:52.864 user 0m1.902s
00:04:52.864 sys 0m0.587s
00:04:52.864 09:43:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.864 09:43:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.864 ************************************
00:04:52.864 END TEST default_locks_via_rpc
00:04:52.864 ************************************
00:04:52.864 09:43:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:52.864 09:43:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:52.864 09:43:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:52.864 09:43:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:52.864 ************************************
00:04:52.864 START TEST non_locking_app_on_locked_coremask
00:04:52.864 ************************************
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2469772
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2469772 /var/tmp/spdk.sock
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2469772 ']'
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.864 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:52.864 [2024-11-20 09:43:26.287720] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:04:52.864 [2024-11-20 09:43:26.287758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469772 ]
00:04:52.864 [2024-11-20 09:43:26.362954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.864 [2024-11-20 09:43:26.405087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2469786
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2469786 /var/tmp/spdk2.sock
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2469786 ']'
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:53.122 09:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:53.122 [2024-11-20 09:43:26.671010] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:04:53.122 [2024-11-20 09:43:26.671058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469786 ]
00:04:53.380 [2024-11-20 09:43:26.756968] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
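locks_exist, seen in the traces above, decides whether a target process holds the per-core CPU lock by asking lslocks for that pid's file locks and grepping for the spdk_cpu_lock path; the "lslocks: write error" lines are most likely the harmless EPIPE that occurs when `grep -q` exits on its first match while lslocks is still writing. A minimal sketch, assuming util-linux `lslocks` is installed:

```shell
# Returns 0 iff the pid holds a file lock whose path mentions spdk_cpu_lock.
# Sketch only; the real helper lives in event/cpu_locks.sh.
locks_exist() {
    local pid=$1
    # grep -q closes the pipe early on a match, which can provoke the
    # benign "lslocks: write error" seen in the log above
    lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
}

locks_exist $$ || echo "current shell holds no spdk_cpu_lock"
```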
00:04:53.380 [2024-11-20 09:43:26.756991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.380 [2024-11-20 09:43:26.837767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.946 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.946 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:53.946 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2469772
00:04:53.946 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2469772
00:04:53.946 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:54.511 lslocks: write error
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2469772
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2469772 ']'
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2469772
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469772
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:54.511 09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469772'
killing process with pid 2469772
09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2469772
09:43:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2469772
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2469786
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2469786 ']'
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2469786
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:55.078 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469786
00:04:55.079 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:55.079 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:55.079 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469786'
killing process with pid 2469786
09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2469786
09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2469786
00:04:55.644
00:04:55.644 real 0m2.692s
00:04:55.644 user 0m2.838s
00:04:55.644 sys 0m0.882s
00:04:55.644 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.644 09:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:55.644 ************************************
00:04:55.644 END TEST non_locking_app_on_locked_coremask
00:04:55.644 ************************************
00:04:55.644 09:43:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:55.644 09:43:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.644 09:43:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.644 09:43:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:55.644 ************************************
00:04:55.644 START TEST locking_app_on_unlocked_coremask
00:04:55.644 ************************************
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2470270
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2470270 /var/tmp/spdk.sock
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2470270 ']'
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.644 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.645 09:43:28
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.645 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.645 09:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.645 [2024-11-20 09:43:29.046726] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:55.645 [2024-11-20 09:43:29.046766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470270 ] 00:04:55.645 [2024-11-20 09:43:29.121859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.645 [2024-11-20 09:43:29.121884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.645 [2024-11-20 09:43:29.163616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2470283 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2470283 /var/tmp/spdk2.sock 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2470283 ']' 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.902 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.902 [2024-11-20 09:43:29.440271] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:04:55.902 [2024-11-20 09:43:29.440321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470283 ] 00:04:56.160 [2024-11-20 09:43:29.531566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.160 [2024-11-20 09:43:29.619940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.726 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.726 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.726 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2470283 00:04:56.726 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.726 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2470283 00:04:57.660 lslocks: write error 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2470270 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2470270 ']' 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2470270 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470270 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470270' 00:04:57.660 killing process with pid 2470270 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2470270 00:04:57.660 09:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2470270 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2470283 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2470283 ']' 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2470283 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470283 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470283' 00:04:58.227 killing process with pid 2470283 00:04:58.227 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2470283 00:04:58.227 09:43:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2470283 00:04:58.484 00:04:58.484 real 0m2.891s 00:04:58.484 user 0m3.036s 00:04:58.484 sys 0m0.974s 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.484 ************************************ 00:04:58.484 END TEST locking_app_on_unlocked_coremask 00:04:58.484 ************************************ 00:04:58.484 09:43:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:58.484 09:43:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.484 09:43:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.484 09:43:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.484 ************************************ 00:04:58.484 START TEST locking_app_on_locked_coremask 00:04:58.484 ************************************ 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2470771 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2470771 /var/tmp/spdk.sock 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2470771 ']' 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.484 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.484 [2024-11-20 09:43:32.006698] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:58.484 [2024-11-20 09:43:32.006737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470771 ] 00:04:58.743 [2024-11-20 09:43:32.080604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.743 [2024-11-20 09:43:32.122522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2470886 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2470886 /var/tmp/spdk2.sock 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2470886 /var/tmp/spdk2.sock 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2470886 /var/tmp/spdk2.sock 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2470886 ']' 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.001 09:43:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.001 [2024-11-20 09:43:32.399883] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:04:59.001 [2024-11-20 09:43:32.399929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470886 ] 00:04:59.001 [2024-11-20 09:43:32.488323] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2470771 has claimed it. 00:04:59.001 [2024-11-20 09:43:32.488358] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2470886) - No such process 00:04:59.567 ERROR: process (pid: 2470886) is no longer running 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2470771 00:04:59.567 09:43:33 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2470771 00:04:59.567 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.133 lslocks: write error 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2470771 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2470771 ']' 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2470771 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470771 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470771' 00:05:00.133 killing process with pid 2470771 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2470771 00:05:00.133 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2470771 00:05:00.392 00:05:00.392 real 0m1.849s 00:05:00.392 user 0m1.983s 00:05:00.392 sys 0m0.607s 00:05:00.392 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.392 09:43:33 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.392 ************************************ 00:05:00.392 END TEST locking_app_on_locked_coremask 00:05:00.392 ************************************ 00:05:00.392 09:43:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:00.392 09:43:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.392 09:43:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.392 09:43:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.392 ************************************ 00:05:00.392 START TEST locking_overlapped_coremask 00:05:00.392 ************************************ 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2471245 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2471245 /var/tmp/spdk.sock 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2471245 ']' 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.392 09:43:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.392 [2024-11-20 09:43:33.928252] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:00.392 [2024-11-20 09:43:33.928296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471245 ] 00:05:00.651 [2024-11-20 09:43:34.004165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.651 [2024-11-20 09:43:34.048583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.651 [2024-11-20 09:43:34.048687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.651 [2024-11-20 09:43:34.048688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2471282 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2471282 /var/tmp/spdk2.sock 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2471282 /var/tmp/spdk2.sock 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:00.909 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2471282 /var/tmp/spdk2.sock 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2471282 ']' 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.910 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.910 [2024-11-20 09:43:34.309576] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:05:00.910 [2024-11-20 09:43:34.309622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471282 ] 00:05:00.910 [2024-11-20 09:43:34.400240] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2471245 has claimed it. 00:05:00.910 [2024-11-20 09:43:34.400275] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2471282) - No such process 00:05:01.476 ERROR: process (pid: 2471282) is no longer running 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2471245 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2471245 ']' 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2471245 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471245 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471245' 00:05:01.476 killing process with pid 2471245 00:05:01.476 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2471245 00:05:01.476 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2471245 00:05:01.735 00:05:01.735 real 0m1.428s 00:05:01.735 user 0m3.912s 00:05:01.735 sys 0m0.389s 00:05:01.735 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.735 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.735 
************************************ 00:05:01.735 END TEST locking_overlapped_coremask 00:05:01.735 ************************************ 00:05:01.994 09:43:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:01.994 09:43:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.994 09:43:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.994 09:43:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.994 ************************************ 00:05:01.994 START TEST locking_overlapped_coremask_via_rpc 00:05:01.994 ************************************ 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2471538 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2471538 /var/tmp/spdk.sock 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2471538 ']' 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:01.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.994 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.994 [2024-11-20 09:43:35.423798] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:01.994 [2024-11-20 09:43:35.423840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471538 ] 00:05:01.994 [2024-11-20 09:43:35.498983] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.994 [2024-11-20 09:43:35.499007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.994 [2024-11-20 09:43:35.539036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.994 [2024-11-20 09:43:35.539074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.994 [2024-11-20 09:43:35.539075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2471545 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2471545 /var/tmp/spdk2.sock 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2471545 ']' 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.252 09:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.252 [2024-11-20 09:43:35.814186] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:02.252 [2024-11-20 09:43:35.814236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471545 ] 00:05:02.510 [2024-11-20 09:43:35.903036] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.510 [2024-11-20 09:43:35.903065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.510 [2024-11-20 09:43:35.985463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.510 [2024-11-20 09:43:35.989248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.510 [2024-11-20 09:43:35.989249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.153 09:43:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.153 [2024-11-20 09:43:36.681277] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2471538 has claimed it. 00:05:03.153 request: 00:05:03.153 { 00:05:03.153 "method": "framework_enable_cpumask_locks", 00:05:03.153 "req_id": 1 00:05:03.153 } 00:05:03.153 Got JSON-RPC error response 00:05:03.153 response: 00:05:03.153 { 00:05:03.153 "code": -32603, 00:05:03.153 "message": "Failed to claim CPU core: 2" 00:05:03.153 } 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2471538 /var/tmp/spdk.sock 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2471538 ']' 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.153 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2471545 /var/tmp/spdk2.sock 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2471545 ']' 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.435 09:43:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.707 00:05:03.707 real 0m1.720s 00:05:03.707 user 0m0.812s 00:05:03.707 sys 0m0.143s 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.707 09:43:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.707 ************************************ 00:05:03.707 END TEST locking_overlapped_coremask_via_rpc 00:05:03.707 ************************************ 00:05:03.707 09:43:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:03.707 09:43:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2471538 ]] 00:05:03.707 09:43:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2471538 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2471538 ']' 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2471538 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471538 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471538' 00:05:03.707 killing process with pid 2471538 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2471538 00:05:03.707 09:43:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2471538 00:05:03.965 09:43:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2471545 ]] 00:05:03.965 09:43:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2471545 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2471545 ']' 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2471545 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471545 00:05:03.965 09:43:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.223 09:43:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.223 09:43:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2471545' 00:05:04.223 killing process with pid 2471545 00:05:04.223 09:43:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2471545 00:05:04.223 09:43:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2471545 00:05:04.482 09:43:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.482 09:43:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:04.482 09:43:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2471538 ]] 00:05:04.483 09:43:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2471538 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2471538 ']' 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2471538 00:05:04.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2471538) - No such process 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2471538 is not found' 00:05:04.483 Process with pid 2471538 is not found 00:05:04.483 09:43:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2471545 ]] 00:05:04.483 09:43:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2471545 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2471545 ']' 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2471545 00:05:04.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2471545) - No such process 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2471545 is not found' 00:05:04.483 Process with pid 2471545 is not found 00:05:04.483 09:43:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.483 00:05:04.483 real 0m14.865s 00:05:04.483 user 0m25.347s 00:05:04.483 sys 0m5.062s 00:05:04.483 09:43:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.483 
09:43:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.483 ************************************ 00:05:04.483 END TEST cpu_locks 00:05:04.483 ************************************ 00:05:04.483 00:05:04.483 real 0m39.540s 00:05:04.483 user 1m14.539s 00:05:04.483 sys 0m8.589s 00:05:04.483 09:43:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.483 09:43:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.483 ************************************ 00:05:04.483 END TEST event 00:05:04.483 ************************************ 00:05:04.483 09:43:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.483 09:43:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.483 09:43:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.483 09:43:37 -- common/autotest_common.sh@10 -- # set +x 00:05:04.483 ************************************ 00:05:04.483 START TEST thread 00:05:04.483 ************************************ 00:05:04.483 09:43:37 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.483 * Looking for test storage... 
00:05:04.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:04.483 09:43:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.483 09:43:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.483 09:43:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.741 09:43:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.741 09:43:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.741 09:43:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.741 09:43:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.741 09:43:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.741 09:43:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.741 09:43:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.741 09:43:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.741 09:43:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.741 09:43:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.741 09:43:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.742 09:43:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.742 09:43:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:04.742 09:43:38 thread -- scripts/common.sh@345 -- # : 1 00:05:04.742 09:43:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.742 09:43:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.742 09:43:38 thread -- scripts/common.sh@365 -- # decimal 1 00:05:04.742 09:43:38 thread -- scripts/common.sh@353 -- # local d=1 00:05:04.742 09:43:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.742 09:43:38 thread -- scripts/common.sh@355 -- # echo 1 00:05:04.742 09:43:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.742 09:43:38 thread -- scripts/common.sh@366 -- # decimal 2 00:05:04.742 09:43:38 thread -- scripts/common.sh@353 -- # local d=2 00:05:04.742 09:43:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.742 09:43:38 thread -- scripts/common.sh@355 -- # echo 2 00:05:04.742 09:43:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.742 09:43:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.742 09:43:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.742 09:43:38 thread -- scripts/common.sh@368 -- # return 0 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.742 --rc genhtml_branch_coverage=1 00:05:04.742 --rc genhtml_function_coverage=1 00:05:04.742 --rc genhtml_legend=1 00:05:04.742 --rc geninfo_all_blocks=1 00:05:04.742 --rc geninfo_unexecuted_blocks=1 00:05:04.742 00:05:04.742 ' 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.742 --rc genhtml_branch_coverage=1 00:05:04.742 --rc genhtml_function_coverage=1 00:05:04.742 --rc genhtml_legend=1 00:05:04.742 --rc geninfo_all_blocks=1 00:05:04.742 --rc geninfo_unexecuted_blocks=1 00:05:04.742 00:05:04.742 ' 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.742 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.742 --rc genhtml_branch_coverage=1 00:05:04.742 --rc genhtml_function_coverage=1 00:05:04.742 --rc genhtml_legend=1 00:05:04.742 --rc geninfo_all_blocks=1 00:05:04.742 --rc geninfo_unexecuted_blocks=1 00:05:04.742 00:05:04.742 ' 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.742 --rc genhtml_branch_coverage=1 00:05:04.742 --rc genhtml_function_coverage=1 00:05:04.742 --rc genhtml_legend=1 00:05:04.742 --rc geninfo_all_blocks=1 00:05:04.742 --rc geninfo_unexecuted_blocks=1 00:05:04.742 00:05:04.742 ' 00:05:04.742 09:43:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.742 09:43:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.742 ************************************ 00:05:04.742 START TEST thread_poller_perf 00:05:04.742 ************************************ 00:05:04.742 09:43:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.742 [2024-11-20 09:43:38.188026] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:05:04.742 [2024-11-20 09:43:38.188085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472107 ] 00:05:04.742 [2024-11-20 09:43:38.269228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.742 [2024-11-20 09:43:38.309409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.742 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:06.116 [2024-11-20T08:43:39.698Z] ====================================== 00:05:06.116 [2024-11-20T08:43:39.698Z] busy:2108206386 (cyc) 00:05:06.116 [2024-11-20T08:43:39.698Z] total_run_count: 415000 00:05:06.116 [2024-11-20T08:43:39.698Z] tsc_hz: 2100000000 (cyc) 00:05:06.116 [2024-11-20T08:43:39.698Z] ====================================== 00:05:06.116 [2024-11-20T08:43:39.698Z] poller_cost: 5080 (cyc), 2419 (nsec) 00:05:06.116 00:05:06.116 real 0m1.187s 00:05:06.116 user 0m1.094s 00:05:06.116 sys 0m0.089s 00:05:06.116 09:43:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.116 09:43:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.116 ************************************ 00:05:06.116 END TEST thread_poller_perf 00:05:06.116 ************************************ 00:05:06.116 09:43:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.116 09:43:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:06.116 09:43:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.116 09:43:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.116 ************************************ 00:05:06.116 START TEST thread_poller_perf 00:05:06.116 
************************************ 00:05:06.116 09:43:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.116 [2024-11-20 09:43:39.447050] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:06.116 [2024-11-20 09:43:39.447124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472355 ] 00:05:06.116 [2024-11-20 09:43:39.522408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.116 [2024-11-20 09:43:39.562123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.116 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:07.050 [2024-11-20T08:43:40.632Z] ====================================== 00:05:07.050 [2024-11-20T08:43:40.632Z] busy:2101699500 (cyc) 00:05:07.050 [2024-11-20T08:43:40.632Z] total_run_count: 5517000 00:05:07.050 [2024-11-20T08:43:40.632Z] tsc_hz: 2100000000 (cyc) 00:05:07.050 [2024-11-20T08:43:40.632Z] ====================================== 00:05:07.050 [2024-11-20T08:43:40.632Z] poller_cost: 380 (cyc), 180 (nsec) 00:05:07.050 00:05:07.050 real 0m1.178s 00:05:07.050 user 0m1.106s 00:05:07.050 sys 0m0.068s 00:05:07.050 09:43:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.050 09:43:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.050 ************************************ 00:05:07.050 END TEST thread_poller_perf 00:05:07.050 ************************************ 00:05:07.308 09:43:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:07.308 00:05:07.308 real 0m2.680s 00:05:07.308 user 0m2.356s 00:05:07.308 sys 0m0.340s 00:05:07.308 09:43:40 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.308 09:43:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.308 ************************************ 00:05:07.308 END TEST thread 00:05:07.308 ************************************ 00:05:07.308 09:43:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:07.308 09:43:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.308 09:43:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.308 09:43:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.308 09:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.308 ************************************ 00:05:07.308 START TEST app_cmdline 00:05:07.308 ************************************ 00:05:07.308 09:43:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.308 * Looking for test storage... 00:05:07.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:07.308 09:43:40 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.308 09:43:40 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.308 09:43:40 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.309 09:43:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.309 --rc genhtml_branch_coverage=1 
00:05:07.309 --rc genhtml_function_coverage=1 00:05:07.309 --rc genhtml_legend=1 00:05:07.309 --rc geninfo_all_blocks=1 00:05:07.309 --rc geninfo_unexecuted_blocks=1 00:05:07.309 00:05:07.309 ' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.309 --rc genhtml_branch_coverage=1 00:05:07.309 --rc genhtml_function_coverage=1 00:05:07.309 --rc genhtml_legend=1 00:05:07.309 --rc geninfo_all_blocks=1 00:05:07.309 --rc geninfo_unexecuted_blocks=1 00:05:07.309 00:05:07.309 ' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.309 --rc genhtml_branch_coverage=1 00:05:07.309 --rc genhtml_function_coverage=1 00:05:07.309 --rc genhtml_legend=1 00:05:07.309 --rc geninfo_all_blocks=1 00:05:07.309 --rc geninfo_unexecuted_blocks=1 00:05:07.309 00:05:07.309 ' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.309 --rc genhtml_branch_coverage=1 00:05:07.309 --rc genhtml_function_coverage=1 00:05:07.309 --rc genhtml_legend=1 00:05:07.309 --rc geninfo_all_blocks=1 00:05:07.309 --rc geninfo_unexecuted_blocks=1 00:05:07.309 00:05:07.309 ' 00:05:07.309 09:43:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:07.309 09:43:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2472663 00:05:07.309 09:43:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:07.309 09:43:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2472663 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2472663 ']' 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.309 09:43:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.567 [2024-11-20 09:43:40.934106] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:07.567 [2024-11-20 09:43:40.934160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472663 ] 00:05:07.567 [2024-11-20 09:43:41.008434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.567 [2024-11-20 09:43:41.050187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.825 09:43:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.825 09:43:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:07.825 09:43:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:08.082 { 00:05:08.082 "version": "SPDK v25.01-pre git sha1 c02c5e04b", 00:05:08.082 "fields": { 00:05:08.082 "major": 25, 00:05:08.082 "minor": 1, 00:05:08.082 "patch": 0, 00:05:08.082 "suffix": "-pre", 00:05:08.082 "commit": "c02c5e04b" 00:05:08.082 } 00:05:08.082 } 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:08.082 09:43:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:08.082 09:43:41 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.341 request: 00:05:08.341 { 00:05:08.341 "method": "env_dpdk_get_mem_stats", 00:05:08.341 "req_id": 1 00:05:08.341 } 00:05:08.341 Got JSON-RPC error response 00:05:08.341 response: 00:05:08.341 { 00:05:08.341 "code": -32601, 00:05:08.341 "message": "Method not found" 00:05:08.341 } 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.341 09:43:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2472663 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2472663 ']' 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2472663 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472663 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472663' 00:05:08.341 killing process with pid 2472663 00:05:08.341 
09:43:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 2472663 00:05:08.341 09:43:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 2472663 00:05:08.600 00:05:08.600 real 0m1.325s 00:05:08.600 user 0m1.540s 00:05:08.600 sys 0m0.441s 00:05:08.600 09:43:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.600 09:43:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.600 ************************************ 00:05:08.600 END TEST app_cmdline 00:05:08.600 ************************************ 00:05:08.600 09:43:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.600 09:43:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.600 09:43:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.600 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.600 ************************************ 00:05:08.600 START TEST version 00:05:08.600 ************************************ 00:05:08.600 09:43:42 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.859 * Looking for test storage... 
00:05:08.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:08.859 09:43:42 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.859 09:43:42 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.859 09:43:42 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.859 09:43:42 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.859 09:43:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.859 09:43:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.859 09:43:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.859 09:43:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.859 09:43:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.859 09:43:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.859 09:43:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.859 09:43:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.859 09:43:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.859 09:43:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.859 09:43:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.859 09:43:42 version -- scripts/common.sh@344 -- # case "$op" in 00:05:08.859 09:43:42 version -- scripts/common.sh@345 -- # : 1 00:05:08.859 09:43:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.860 09:43:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.860 09:43:42 version -- scripts/common.sh@365 -- # decimal 1 00:05:08.860 09:43:42 version -- scripts/common.sh@353 -- # local d=1 00:05:08.860 09:43:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.860 09:43:42 version -- scripts/common.sh@355 -- # echo 1 00:05:08.860 09:43:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.860 09:43:42 version -- scripts/common.sh@366 -- # decimal 2 00:05:08.860 09:43:42 version -- scripts/common.sh@353 -- # local d=2 00:05:08.860 09:43:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.860 09:43:42 version -- scripts/common.sh@355 -- # echo 2 00:05:08.860 09:43:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.860 09:43:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.860 09:43:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.860 09:43:42 version -- scripts/common.sh@368 -- # return 0 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.860 --rc genhtml_branch_coverage=1 00:05:08.860 --rc genhtml_function_coverage=1 00:05:08.860 --rc genhtml_legend=1 00:05:08.860 --rc geninfo_all_blocks=1 00:05:08.860 --rc geninfo_unexecuted_blocks=1 00:05:08.860 00:05:08.860 ' 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.860 --rc genhtml_branch_coverage=1 00:05:08.860 --rc genhtml_function_coverage=1 00:05:08.860 --rc genhtml_legend=1 00:05:08.860 --rc geninfo_all_blocks=1 00:05:08.860 --rc geninfo_unexecuted_blocks=1 00:05:08.860 00:05:08.860 ' 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.860 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.860 --rc genhtml_branch_coverage=1 00:05:08.860 --rc genhtml_function_coverage=1 00:05:08.860 --rc genhtml_legend=1 00:05:08.860 --rc geninfo_all_blocks=1 00:05:08.860 --rc geninfo_unexecuted_blocks=1 00:05:08.860 00:05:08.860 ' 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.860 --rc genhtml_branch_coverage=1 00:05:08.860 --rc genhtml_function_coverage=1 00:05:08.860 --rc genhtml_legend=1 00:05:08.860 --rc geninfo_all_blocks=1 00:05:08.860 --rc geninfo_unexecuted_blocks=1 00:05:08.860 00:05:08.860 ' 00:05:08.860 09:43:42 version -- app/version.sh@17 -- # get_header_version major 00:05:08.860 09:43:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # cut -f2 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.860 09:43:42 version -- app/version.sh@17 -- # major=25 00:05:08.860 09:43:42 version -- app/version.sh@18 -- # get_header_version minor 00:05:08.860 09:43:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # cut -f2 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.860 09:43:42 version -- app/version.sh@18 -- # minor=1 00:05:08.860 09:43:42 version -- app/version.sh@19 -- # get_header_version patch 00:05:08.860 09:43:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # cut -f2 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.860 
09:43:42 version -- app/version.sh@19 -- # patch=0 00:05:08.860 09:43:42 version -- app/version.sh@20 -- # get_header_version suffix 00:05:08.860 09:43:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # cut -f2 00:05:08.860 09:43:42 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.860 09:43:42 version -- app/version.sh@20 -- # suffix=-pre 00:05:08.860 09:43:42 version -- app/version.sh@22 -- # version=25.1 00:05:08.860 09:43:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:08.860 09:43:42 version -- app/version.sh@28 -- # version=25.1rc0 00:05:08.860 09:43:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:08.860 09:43:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:08.860 09:43:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:08.860 09:43:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:08.860 00:05:08.860 real 0m0.243s 00:05:08.860 user 0m0.150s 00:05:08.860 sys 0m0.139s 00:05:08.860 09:43:42 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.860 09:43:42 version -- common/autotest_common.sh@10 -- # set +x 00:05:08.860 ************************************ 00:05:08.860 END TEST version 00:05:08.860 ************************************ 00:05:08.860 09:43:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:08.860 09:43:42 -- spdk/autotest.sh@194 -- # uname -s 00:05:08.860 09:43:42 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:08.860 09:43:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.860 09:43:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:08.860 09:43:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:08.860 09:43:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.860 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:05:08.860 09:43:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:08.860 09:43:42 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:08.860 09:43:42 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:08.860 09:43:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:08.860 09:43:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.860 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 ************************************ 00:05:09.121 START TEST nvmf_tcp 00:05:09.121 ************************************ 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:09.121 * Looking for test storage... 
00:05:09.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.121 09:43:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.121 --rc genhtml_branch_coverage=1 00:05:09.121 --rc genhtml_function_coverage=1 00:05:09.121 --rc genhtml_legend=1 00:05:09.121 --rc geninfo_all_blocks=1 00:05:09.121 --rc geninfo_unexecuted_blocks=1 00:05:09.121 00:05:09.121 ' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.121 --rc genhtml_branch_coverage=1 00:05:09.121 --rc genhtml_function_coverage=1 00:05:09.121 --rc genhtml_legend=1 00:05:09.121 --rc geninfo_all_blocks=1 00:05:09.121 --rc geninfo_unexecuted_blocks=1 00:05:09.121 00:05:09.121 ' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:09.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.121 --rc genhtml_branch_coverage=1 00:05:09.121 --rc genhtml_function_coverage=1 00:05:09.121 --rc genhtml_legend=1 00:05:09.121 --rc geninfo_all_blocks=1 00:05:09.121 --rc geninfo_unexecuted_blocks=1 00:05:09.121 00:05:09.121 ' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.121 --rc genhtml_branch_coverage=1 00:05:09.121 --rc genhtml_function_coverage=1 00:05:09.121 --rc genhtml_legend=1 00:05:09.121 --rc geninfo_all_blocks=1 00:05:09.121 --rc geninfo_unexecuted_blocks=1 00:05:09.121 00:05:09.121 ' 00:05:09.121 09:43:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:09.121 09:43:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:09.121 09:43:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.121 09:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 ************************************ 00:05:09.121 START TEST nvmf_target_core 00:05:09.121 ************************************ 00:05:09.121 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.381 * Looking for test storage... 
00:05:09.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.381 --rc genhtml_branch_coverage=1 00:05:09.381 --rc genhtml_function_coverage=1 00:05:09.381 --rc genhtml_legend=1 00:05:09.381 --rc geninfo_all_blocks=1 00:05:09.381 --rc geninfo_unexecuted_blocks=1 00:05:09.381 00:05:09.381 ' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.381 --rc genhtml_branch_coverage=1 
00:05:09.381 --rc genhtml_function_coverage=1 00:05:09.381 --rc genhtml_legend=1 00:05:09.381 --rc geninfo_all_blocks=1 00:05:09.381 --rc geninfo_unexecuted_blocks=1 00:05:09.381 00:05:09.381 ' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.381 --rc genhtml_branch_coverage=1 00:05:09.381 --rc genhtml_function_coverage=1 00:05:09.381 --rc genhtml_legend=1 00:05:09.381 --rc geninfo_all_blocks=1 00:05:09.381 --rc geninfo_unexecuted_blocks=1 00:05:09.381 00:05:09.381 ' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.381 --rc genhtml_branch_coverage=1 00:05:09.381 --rc genhtml_function_coverage=1 00:05:09.381 --rc genhtml_legend=1 00:05:09.381 --rc geninfo_all_blocks=1 00:05:09.381 --rc geninfo_unexecuted_blocks=1 00:05:09.381 00:05:09.381 ' 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:09.381 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:09.382 ************************************ 00:05:09.382 START TEST nvmf_abort 00:05:09.382 ************************************ 00:05:09.382 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.642 * Looking for test storage... 
00:05:09.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:09.642 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.642 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.642 09:43:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.642 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.643 
09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.643 --rc genhtml_branch_coverage=1 00:05:09.643 --rc genhtml_function_coverage=1 00:05:09.643 --rc genhtml_legend=1 00:05:09.643 --rc geninfo_all_blocks=1 00:05:09.643 --rc 
geninfo_unexecuted_blocks=1 00:05:09.643 00:05:09.643 ' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.643 --rc genhtml_branch_coverage=1 00:05:09.643 --rc genhtml_function_coverage=1 00:05:09.643 --rc genhtml_legend=1 00:05:09.643 --rc geninfo_all_blocks=1 00:05:09.643 --rc geninfo_unexecuted_blocks=1 00:05:09.643 00:05:09.643 ' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.643 --rc genhtml_branch_coverage=1 00:05:09.643 --rc genhtml_function_coverage=1 00:05:09.643 --rc genhtml_legend=1 00:05:09.643 --rc geninfo_all_blocks=1 00:05:09.643 --rc geninfo_unexecuted_blocks=1 00:05:09.643 00:05:09.643 ' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.643 --rc genhtml_branch_coverage=1 00:05:09.643 --rc genhtml_function_coverage=1 00:05:09.643 --rc genhtml_legend=1 00:05:09.643 --rc geninfo_all_blocks=1 00:05:09.643 --rc geninfo_unexecuted_blocks=1 00:05:09.643 00:05:09.643 ' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
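The trace above walks SPDK's `lt 1.15 2` / `cmp_versions` helpers field by field: both versions are split on `.`, `-` and `:` into arrays, the shorter one is zero-padded, and the first differing field decides the result. A standalone re-creation of that logic (not the `scripts/common.sh` source itself, just a sketch of the same algorithm):

```shell
# Field-wise "less than" version comparison, mirroring the traced steps:
# split on ".", "-" and ":", pad with zeros, compare numerically per field.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=${#ver1[@]}
  (( ${#ver2[@]} > max )) && max=${#ver2[@]}
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0    # first differing field decides
    (( a > b )) && return 1
  done
  return 1                     # equal versions are not "less than"
}
```

Because the fields compare numerically rather than lexically, `lt 1.9 1.15` is true (9 < 15), which is why lcov 1.15 sorts below 2 here and the old-lcov `--rc` options get selected.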
00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.643 09:43:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
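Note how each `source` of `paths/export.sh` in the trace prepends the same three toolchain directories again, so PATH balloons with duplicates (harmless, since lookup stops at the first hit, but noisy). A common guard against this is membership-checked prepending; `path_prepend` here is a hypothetical helper, not part of SPDK:

```shell
# Prepend a directory to PATH only if it is not already present.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already on PATH: leave it unchanged
    *) PATH="$1:$PATH" ;;
  esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
export PATH
```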
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:09.643 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:09.644 09:43:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.215 09:43:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.215 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:16.215 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:16.216 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.216 09:43:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:16.216 Found net devices under 0000:86:00.0: cvl_0_0 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:16.216 Found net devices under 0000:86:00.1: cvl_0_1 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.216 09:43:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:16.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:05:16.216 00:05:16.216 --- 10.0.0.2 ping statistics --- 00:05:16.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.216 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:05:16.216 00:05:16.216 --- 10.0.0.1 ping statistics --- 00:05:16.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.216 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort 
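The `nvmf_tcp_init` phase traced above isolates the target-side port in its own network namespace and then proves connectivity in both directions with `ping`. A condensed sketch of that plumbing follows; it requires root and the real `cvl_0_0`/`cvl_0_1` interfaces enumerated earlier, so it is illustrative only and exits early when run unprivileged:

```shell
# Sketch of the traced namespace setup (config fragment; needs root + NICs).
[ "$(id -u)" -eq 0 ] || { echo "needs root; skipping"; exit 0; }

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                        # target gets its own namespace
ip link set cvl_0_0 netns "$NS"           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listen port toward the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # host -> target check
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> host check
```

Putting the target behind a namespace is what forces all target traffic through the real NIC pair instead of the loopback path, which is the point of the "phy" variant of this test.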
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2476269 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2476269 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2476269 ']' 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.216 09:43:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.216 [2024-11-20 09:43:49.230012] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:05:16.216 [2024-11-20 09:43:49.230055] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.217 [2024-11-20 09:43:49.310271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.217 [2024-11-20 09:43:49.353367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.217 [2024-11-20 09:43:49.353401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.217 [2024-11-20 09:43:49.353408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.217 [2024-11-20 09:43:49.353414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.217 [2024-11-20 09:43:49.353419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:16.217 [2024-11-20 09:43:49.354893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.217 [2024-11-20 09:43:49.354999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.217 [2024-11-20 09:43:49.355000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.783 [2024-11-20 09:43:50.112832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.783 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 Malloc0 00:05:16.784 09:43:50 
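The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is the `waitforlisten` helper from `autotest_common.sh` polling until `nvmf_tgt` is both alive and serving RPC. A hedged re-creation of the idea (the function name and polling interval here are assumptions, not the SPDK implementation):

```shell
# Poll until the given pid is alive AND its RPC socket exists, or give up.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} tries=${3:-100} i
  for (( i = 0; i < tries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$rpc_addr" ] && return 0           # RPC socket is accepting
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Checking the pid on every iteration matters: if the target crashes during startup, the wait fails fast instead of burning the whole timeout.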
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 Delay0 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 [2024-11-20 09:43:50.193228] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.784 09:43:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.784 [2024-11-20 09:43:50.289053] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.315 Initializing NVMe Controllers 00:05:19.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.315 controller IO queue size 128 less than required 00:05:19.315 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.315 Initialization complete. Launching workers. 
00:05:19.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37458 00:05:19.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37519, failed to submit 62 00:05:19.315 success 37462, unsuccessful 57, failed 0 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.315 rmmod nvme_tcp 00:05:19.315 rmmod nvme_fabrics 00:05:19.315 rmmod nvme_keyring 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.315 09:43:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2476269 ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2476269 ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476269' 00:05:19.315 killing process with pid 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2476269 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.315 09:43:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.218 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.218 00:05:21.218 real 0m11.850s 00:05:21.218 user 0m13.584s 00:05:21.218 sys 0m5.441s 00:05:21.218 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.218 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.218 ************************************ 00:05:21.218 END TEST nvmf_abort 00:05:21.218 ************************************ 00:05:21.476 09:43:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.476 09:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.476 09:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.476 09:43:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:21.476 ************************************ 00:05:21.476 START TEST nvmf_ns_hotplug_stress 00:05:21.476 ************************************ 00:05:21.477 09:43:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.477 * Looking for test storage... 00:05:21.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.477 
09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.477 09:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.477 09:43:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.477 --rc genhtml_branch_coverage=1 00:05:21.477 --rc genhtml_function_coverage=1 00:05:21.477 --rc genhtml_legend=1 00:05:21.477 --rc geninfo_all_blocks=1 00:05:21.477 --rc geninfo_unexecuted_blocks=1 00:05:21.477 00:05:21.477 ' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.477 --rc genhtml_branch_coverage=1 00:05:21.477 --rc genhtml_function_coverage=1 00:05:21.477 --rc genhtml_legend=1 00:05:21.477 --rc geninfo_all_blocks=1 00:05:21.477 --rc geninfo_unexecuted_blocks=1 00:05:21.477 00:05:21.477 ' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.477 --rc genhtml_branch_coverage=1 00:05:21.477 --rc genhtml_function_coverage=1 00:05:21.477 --rc genhtml_legend=1 00:05:21.477 --rc geninfo_all_blocks=1 00:05:21.477 --rc geninfo_unexecuted_blocks=1 00:05:21.477 00:05:21.477 ' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.477 --rc genhtml_branch_coverage=1 00:05:21.477 --rc genhtml_function_coverage=1 00:05:21.477 --rc genhtml_legend=1 00:05:21.477 --rc geninfo_all_blocks=1 00:05:21.477 --rc geninfo_unexecuted_blocks=1 00:05:21.477 
00:05:21.477 ' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.477 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.478 09:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.048 09:44:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.048 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:28.049 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:28.049 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.049 09:44:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:28.049 Found net devices under 0000:86:00.0: cvl_0_0 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.049 09:44:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:28.049 Found net devices under 0000:86:00.1: cvl_0_1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:28.049 09:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.049 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.049 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.049 09:44:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:28.049 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:28.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:28.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:05:28.049 00:05:28.049 --- 10.0.0.2 ping statistics --- 00:05:28.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.050 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:05:28.050 00:05:28.050 --- 10.0.0.1 ping statistics --- 00:05:28.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.050 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2480388 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2480388 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2480388 ']' 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
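Annotation: the `waitforlisten 2480388` call above polls until the freshly started `nvmf_tgt` has created its RPC socket at `/var/tmp/spdk.sock`. A minimal stand-alone sketch of that polling pattern follows; the socket is simulated with a plain temp file created by a stub background job (an assumption for illustration — no real target is started), and `max_retries=100` mirrors the value traced in the log.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the target's RPC
# socket appears, giving up after max_retries attempts.
rpc_addr=$(mktemp -u)              # stand-in for /var/tmp/spdk.sock
( sleep 1; touch "$rpc_addr" ) &   # stub: "nvmf_tgt" creating its socket
max_retries=100
i=0
ready=no
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$rpc_addr" ]; then
        ready=yes                  # socket showed up; target is listening
        break
    fi
    i=$((i + 1))
    sleep 0.1
done
echo "socket ready: $ready"
rm -f "$rpc_addr"
```

The real helper additionally checks that the process is still alive between polls, so a crashed target fails fast instead of burning the full retry budget.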
00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.050 [2024-11-20 09:44:01.127047] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:05:28.050 [2024-11-20 09:44:01.127090] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.050 [2024-11-20 09:44:01.205452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.050 [2024-11-20 09:44:01.244776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:28.050 [2024-11-20 09:44:01.244814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:28.050 [2024-11-20 09:44:01.244821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.050 [2024-11-20 09:44:01.244826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.050 [2024-11-20 09:44:01.244831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:28.050 [2024-11-20 09:44:01.246284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.050 [2024-11-20 09:44:01.246391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.050 [2024-11-20 09:44:01.246392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:28.050 [2024-11-20 09:44:01.542218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.050 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:28.309 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.568 [2024-11-20 09:44:01.959706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.568 09:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:28.826 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:28.826 Malloc0 00:05:29.084 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:29.084 Delay0 00:05:29.084 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.342 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:29.600 NULL1 00:05:29.600 09:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:29.600 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:29.600 09:44:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2480774 00:05:29.600 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:29.600 09:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.974 Read completed with error (sct=0, sc=11) 00:05:30.974 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.232 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:31.232 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:31.232 true 00:05:31.489 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:31.489 09:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.054 09:44:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.312 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:32.312 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:32.569 true 00:05:32.569 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:32.569 09:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.826 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.826 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:32.826 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:33.084 true 00:05:33.084 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:33.084 09:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.455 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.455 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:34.455 09:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:34.455 true 00:05:34.714 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:34.714 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.714 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.972 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:34.972 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:35.230 true 00:05:35.230 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:35.230 09:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.487 09:44:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.487 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:35.487 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:35.747 true 00:05:35.747 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:35.747 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.003 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.260 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:36.260 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:36.260 true 00:05:36.260 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:36.260 09:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.631 09:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.631 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:37.631 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:37.888 true 00:05:37.888 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:37.888 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.146 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.404 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:38.404 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:38.404 true 00:05:38.404 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:38.404 09:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.776 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:39.776 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:40.033 true 00:05:40.033 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:40.033 09:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.968 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.226 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1011 00:05:41.226 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:41.226 true 00:05:41.226 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:41.226 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.483 09:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.741 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:41.741 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:41.741 true 00:05:41.999 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:41.999 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.999 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.257 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:05:42.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.257 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:42.257 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:42.515 true 00:05:42.515 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:42.515 09:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.468 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.468 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:43.468 09:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:43.726 true 00:05:43.726 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:43.726 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.984 09:44:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.242 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:44.242 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:44.242 true 00:05:44.242 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:44.242 09:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 09:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.614 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:45.614 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:45.895 true 
00:05:45.895 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:45.895 09:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.827 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.827 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:46.827 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:47.084 true 00:05:47.084 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:47.084 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.341 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.597 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:47.597 09:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:47.597 true 00:05:47.597 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 
00:05:47.597 09:44:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.969 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.969 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:48.969 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:49.227 true 00:05:49.227 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:49.227 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.485 09:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.485 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:49.485 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:49.742 true 00:05:49.742 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:49.742 09:44:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.113 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:51.113 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:51.372 true 00:05:51.372 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:51.372 09:44:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.305 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.305 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:52.305 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:52.603 true 00:05:52.603 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:52.603 09:44:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.901 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.901 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:52.901 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:53.192 true 00:05:53.192 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:53.192 09:44:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.129 09:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.388 09:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:54.388 09:44:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:54.645 true 00:05:54.645 09:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:54.645 09:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.578 09:44:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.578 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:55.578 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:55.835 true 00:05:55.835 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:55.835 09:44:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.091 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.349 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:56.349 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:56.349 true 00:05:56.349 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:56.349 09:44:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.722 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 
00:05:57.722 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:57.979 true 00:05:57.979 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:57.979 09:44:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.913 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.913 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:58.913 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:59.170 true 00:05:59.170 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774 00:05:59.170 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.427 09:44:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.685 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:59.685 09:44:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:59.685 true
00:05:59.685 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774
00:05:59.685 09:44:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.059 Initializing NVMe Controllers
00:06:01.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:01.059 Controller IO queue size 128, less than required.
00:06:01.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:01.059 Controller IO queue size 128, less than required.
00:06:01.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:01.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:01.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:01.059 Initialization complete. Launching workers.
00:06:01.059 ========================================================
00:06:01.059                                                                             Latency(us)
00:06:01.059 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:01.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1989.37       0.97   42252.59    1105.85 1013017.79
00:06:01.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16207.00       7.91    7897.69    1293.96  443603.31
00:06:01.059 ========================================================
00:06:01.059 Total                                                                    :   18196.37       8.88   11653.63    1105.85 1013017.79
00:06:01.059
00:06:01.059 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.059 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:01.059 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:01.318 true
00:06:01.318 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2480774
00:06:01.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2480774) - No such process
00:06:01.318 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2480774
00:06:01.318 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.577 09:44:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:01.836
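[editor note] The first phase of the log above repeats script lines @44-@50 of ns_hotplug_stress.sh: check the stress process is still alive, remove namespace 1, re-add the Delay0 bdev as a namespace, and grow the NULL1 null bdev by one block each pass, stopping once `kill -0` fails. A minimal sketch of that loop, with `rpc` stubbed out (assumption: the real script shells out to scripts/rpc.py against a live SPDK target, which is not reproduced here), and the cap lowered so the sketch terminates quickly:

```shell
#!/bin/sh
# Stub standing in for scripts/rpc.py (assumption; it only echoes the call).
rpc() { echo "rpc $*"; }

PID=$$          # stand-in for the stress-process PID (2480774 in the log)
null_size=1018  # the log shows this counter passing 1019, 1020, ...

# Keep hot-removing/re-adding the namespace and resizing the null bdev
# while the stress process exists (here: until the lowered cap is reached).
while kill -0 "$PID" 2>/dev/null && [ "$null_size" -lt 1021 ]; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final size: $null_size"
```

With the stub, three passes run and `null_size` ends at 1021, mirroring the monotonically growing resize targets recorded in the log.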
09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:01.836 null0 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.836 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:02.095 null1 00:06:02.095 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.095 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.095 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:02.354 null2 00:06:02.354 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.354 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.354 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:02.612 null3 00:06:02.612 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.612 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.612 09:44:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:02.612 null4 00:06:02.612 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.612 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.612 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:02.872 null5 00:06:02.872 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.872 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.872 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:03.130 null6 00:06:03.131 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.131 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.131 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:03.390 null7 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
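[editor note] The second phase (script lines @58-@64) creates eight null bdevs and launches one backgrounded add/remove worker per bdev, collecting the job PIDs for a later `wait`. A sketch of that fan-out, with `rpc` and the worker body stubbed (assumptions; the real worker is the script's `add_remove` function):

```shell
#!/bin/bash
# Stubs (assumptions): rpc.py call and the add_remove worker are simulated.
rpc() { :; }
add_remove() {   # hypothetical worker: hot-add then hot-remove namespace $1
    rpc nvmf_subsystem_add_ns -n "$1" nqn.2016-06.io.spdk:cnode1 "$2"
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$1"
}

nthreads=8
pids=()
# Create null0..null7 (100 MiB, 4096-byte blocks, per the log's arguments).
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
# Spawn one worker per bdev and remember each job PID with $!.
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"
echo "spawned ${#pids[@]} workers"
```

This reproduces the `pids+=($!)` bookkeeping and the `wait 2486487 2486488 ...` call interleaved through the log.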
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2486487 2486488 2486491 2486493 2486495 2486497 2486499 2486501 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:03.390 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.391 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.391 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.649 09:44:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.649 
09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.649 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.650 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.908 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.166 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.425 09:44:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.425 09:44:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 
09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.683 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.942 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.943 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.943 09:44:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.201 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.461 09:44:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.461 09:44:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.461 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.725 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.985 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.245 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.504 09:44:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.504 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.762 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.762 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.763 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.763 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.763 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.763 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.763 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.022 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.281 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.281 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.281 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.282 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.282 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.282 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:07.282 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.282 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.541 09:44:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.541 rmmod nvme_tcp 00:06:07.541 rmmod nvme_fabrics 00:06:07.541 rmmod nvme_keyring 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2480388 ']' 00:06:07.541 09:44:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2480388 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2480388 ']' 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2480388 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480388 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480388' 00:06:07.541 killing process with pid 2480388 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2480388 00:06:07.541 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2480388 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-save 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.800 09:44:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.336 00:06:10.336 real 0m48.457s 00:06:10.336 user 3m17.205s 00:06:10.336 sys 0m15.644s 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.336 ************************************ 00:06:10.336 END TEST nvmf_ns_hotplug_stress 00:06:10.336 ************************************ 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core -- 
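The hotplug stress phase that just finished above corresponds to a loop like the following — a minimal sketch reconstructed from the `ns_hotplug_stress.sh@16`-`@18` markers in the trace. The NQN, the namespace IDs 1..8, and the `null0`..`null7` bdev names are taken from the log; the `rpc` stub standing in for `scripts/rpc.py` and the use of `shuf` to model the randomized ordering are assumptions for illustration.

```shell
# Stand-in for scripts/rpc.py (assumption for illustration); the real test
# drives the SPDK target over JSON-RPC.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
    # Attach namespaces 1..8 (backed by null0..null7) in random order,
    # matching the shuffled add_ns calls seen in the trace...
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # ...then detach them again, also in random order.
    for n in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
    (( ++i ))
done
echo "iterations: $i"
```

Randomizing the add/remove order each iteration is what makes this a hotplug *stress* test: the target must handle namespaces appearing and disappearing in arbitrary sequences while I/O is in flight.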
common/autotest_common.sh@10 -- # set +x 00:06:10.336 ************************************ 00:06:10.336 START TEST nvmf_delete_subsystem 00:06:10.336 ************************************ 00:06:10.336 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.336 * Looking for test storage... 00:06:10.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.337 09:44:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.337 --rc genhtml_branch_coverage=1 00:06:10.337 --rc genhtml_function_coverage=1 00:06:10.337 --rc genhtml_legend=1 00:06:10.337 --rc geninfo_all_blocks=1 00:06:10.337 --rc geninfo_unexecuted_blocks=1 00:06:10.337 00:06:10.337 ' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.337 --rc genhtml_branch_coverage=1 00:06:10.337 --rc genhtml_function_coverage=1 00:06:10.337 --rc genhtml_legend=1 00:06:10.337 --rc geninfo_all_blocks=1 00:06:10.337 --rc geninfo_unexecuted_blocks=1 00:06:10.337 00:06:10.337 ' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.337 --rc genhtml_branch_coverage=1 00:06:10.337 --rc genhtml_function_coverage=1 00:06:10.337 --rc genhtml_legend=1 00:06:10.337 --rc geninfo_all_blocks=1 00:06:10.337 --rc geninfo_unexecuted_blocks=1 00:06:10.337 00:06:10.337 ' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.337 --rc 
genhtml_branch_coverage=1 00:06:10.337 --rc genhtml_function_coverage=1 00:06:10.337 --rc genhtml_legend=1 00:06:10.337 --rc geninfo_all_blocks=1 00:06:10.337 --rc geninfo_unexecuted_blocks=1 00:06:10.337 00:06:10.337 ' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
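The `cmp_versions 1.15 '<' 2` walkthrough traced above (scripts/common.sh@333-368) amounts to a field-wise numeric comparison after splitting both versions on `.`, `-`, and `:`. The sketch below reconstructs that logic; the function name `ver_lt` and the padding of missing fields with 0 are my own, hedged rendering of the traced steps, not the exact upstream code.

```shell
# Compare two dotted versions field by field; return 0 (true) if $1 < $2.
ver_lt() {
    local IFS=.-:                 # split fields the same way the trace does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                      # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the test selects the lcov branch-coverage options above: the detected lcov version 1.15 compares as older than 2.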
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.337 09:44:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.337 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.338 09:44:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.912 09:44:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.912 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:16.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:16.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:16.913 Found net devices under 0000:86:00.0: cvl_0_0 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:06:16.913 Found net devices under 0000:86:00.1: cvl_0_1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:16.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:06:16.913 00:06:16.913 --- 10.0.0.2 ping statistics --- 00:06:16.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.913 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:06:16.913 00:06:16.913 --- 10.0.0.1 ping statistics --- 00:06:16.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.913 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.913 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:16.914 09:44:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2490889 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2490889 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2490889 ']' 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 [2024-11-20 09:44:49.695058] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:06:16.914 [2024-11-20 09:44:49.695106] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.914 [2024-11-20 09:44:49.774977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.914 [2024-11-20 09:44:49.815881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.914 [2024-11-20 09:44:49.815918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.914 [2024-11-20 09:44:49.815925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.914 [2024-11-20 09:44:49.815931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.914 [2024-11-20 09:44:49.815936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
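[Editor's sketch] The nvmf_tcp_init steps traced above build a two-interface TCP test network by moving the target-side port into its own network namespace. The commands below summarize that setup as a standalone sketch; the interface names (cvl_0_0, cvl_0_1), addresses, and port are taken from this run, but on other machines the device names will differ, and the script must run as root. It is an illustration of the harness steps, not a drop-in replacement for nvmf/common.sh.

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-net setup performed by nvmf_tcp_init in this log.
# Assumes two NIC ports already named cvl_0_0 / cvl_0_1 (names are specific
# to this CI host) and root privileges.
TARGET_IF=cvl_0_0          # target side, moved into a namespace
INIT_IF=cvl_0_1            # initiator side, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INIT_IF"                        # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP

ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port (4420) and verify reachability in both directions,
# as the log does before declaring the environment ready.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target listener runs inside the namespace, the test later launches nvmf_tgt via `ip netns exec cvl_0_0_ns_spdk …`, while the initiator-side perf tool connects from the default namespace.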
00:06:16.914 [2024-11-20 09:44:49.817063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.914 [2024-11-20 09:44:49.817065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 [2024-11-20 09:44:49.951583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 [2024-11-20 09:44:49.971796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 NULL1 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 Delay0 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.914 09:44:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.914 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.914 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2490915 00:06:16.914 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:16.914 09:44:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:16.914 [2024-11-20 09:44:50.082756] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:18.814 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.814 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.814 09:44:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error 
(sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 starting I/O failed: -6 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 [2024-11-20 09:44:52.287444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe99680 is same with the state(6) to be set 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 
Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Write completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error 
(sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.814 Read completed with error (sct=0, sc=8) 00:06:18.815 [2024-11-20 09:44:52.288714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe994a0 is same with the state(6) to be set 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 
Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 starting I/O failed: -6 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 [2024-11-20 09:44:52.291979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f612400d350 is same with the state(6) to be set 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error 
(sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:18.815 Write completed with error (sct=0, sc=8) 00:06:18.815 Read completed with error (sct=0, sc=8) 00:06:19.748 [2024-11-20 09:44:53.261069] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a9a0 is same with the state(6) to be set 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.748 Write completed with error (sct=0, sc=8) 00:06:19.748 Read completed with error (sct=0, sc=8) 00:06:19.749 [2024-11-20 09:44:53.290639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe992c0 is same with the state(6) to be set 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 
00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 [2024-11-20 09:44:53.291003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe99860 is same with the state(6) to be set 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 [2024-11-20 09:44:53.294212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f612400d020 is same with the state(6) to be set 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Read completed with error (sct=0, sc=8) 00:06:19.749 Write completed with error (sct=0, sc=8) 00:06:19.749 [2024-11-20 09:44:53.294654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f612400d680 is same with the state(6) to be set 00:06:19.749 Initializing NVMe Controllers 00:06:19.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:19.749 Controller IO queue size 128, less than required. 00:06:19.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:19.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:19.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:19.749 Initialization complete. Launching workers. 
00:06:19.749 ======================================================== 00:06:19.749 Latency(us) 00:06:19.749 Device Information : IOPS MiB/s Average min max 00:06:19.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.30 0.08 903645.17 755.61 1006516.41 00:06:19.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.81 0.08 909980.52 240.61 1009807.80 00:06:19.749 ======================================================== 00:06:19.749 Total : 329.11 0.16 906798.47 240.61 1009807.80 00:06:19.749 00:06:19.749 [2024-11-20 09:44:53.295176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a9a0 (9): Bad file descriptor 00:06:19.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:19.749 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.749 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:19.749 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2490915 00:06:19.749 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2490915 00:06:20.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2490915) - No such process 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2490915 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:20.315 09:44:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2490915 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2490915 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.315 
09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.315 [2024-11-20 09:44:53.827944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2491610 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:20.315 09:44:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.574 [2024-11-20 09:44:53.914113] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:20.831 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.831 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:20.831 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.396 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.396 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:21.396 09:44:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.962 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.962 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:21.962 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.528 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.528 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:22.528 09:44:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.093 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.093 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:23.093 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.351 09:44:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.351 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:23.351 09:44:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.609 Initializing NVMe Controllers 00:06:23.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:23.609 Controller IO queue size 128, less than required. 00:06:23.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:23.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:23.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:23.609 Initialization complete. Launching workers. 00:06:23.609 ======================================================== 00:06:23.609 Latency(us) 00:06:23.609 Device Information : IOPS MiB/s Average min max 00:06:23.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002190.65 1000126.68 1008138.32 00:06:23.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003118.12 1000160.92 1009170.42 00:06:23.609 ======================================================== 00:06:23.609 Total : 256.00 0.12 1002654.39 1000126.68 1009170.42 00:06:23.609 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2491610 00:06:23.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2491610) - No such process 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2491610 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.867 rmmod nvme_tcp 00:06:23.867 rmmod nvme_fabrics 00:06:23.867 rmmod nvme_keyring 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2490889 ']' 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2490889 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2490889 ']' 00:06:23.867 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2490889 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:24.126 09:44:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2490889 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2490889' 00:06:24.126 killing process with pid 2490889 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2490889 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2490889 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:24.126 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.127 09:44:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.663 00:06:26.663 real 0m16.361s 00:06:26.663 user 0m29.412s 00:06:26.663 sys 0m5.641s 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 ************************************ 00:06:26.663 END TEST nvmf_delete_subsystem 00:06:26.663 ************************************ 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 ************************************ 00:06:26.663 START TEST nvmf_host_management 00:06:26.663 ************************************ 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.663 * Looking for test storage... 
00:06:26.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.663 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:26.664 09:44:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.664 09:44:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.664 --rc genhtml_branch_coverage=1 00:06:26.664 --rc genhtml_function_coverage=1 00:06:26.664 --rc genhtml_legend=1 00:06:26.664 --rc geninfo_all_blocks=1 00:06:26.664 --rc geninfo_unexecuted_blocks=1 00:06:26.664 00:06:26.664 ' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.664 --rc genhtml_branch_coverage=1 00:06:26.664 --rc genhtml_function_coverage=1 00:06:26.664 --rc genhtml_legend=1 00:06:26.664 --rc geninfo_all_blocks=1 00:06:26.664 --rc geninfo_unexecuted_blocks=1 00:06:26.664 00:06:26.664 ' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.664 --rc genhtml_branch_coverage=1 00:06:26.664 --rc genhtml_function_coverage=1 00:06:26.664 --rc genhtml_legend=1 00:06:26.664 --rc geninfo_all_blocks=1 00:06:26.664 --rc geninfo_unexecuted_blocks=1 00:06:26.664 00:06:26.664 ' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.664 --rc genhtml_branch_coverage=1 00:06:26.664 --rc genhtml_function_coverage=1 00:06:26.664 --rc genhtml_legend=1 00:06:26.664 --rc geninfo_all_blocks=1 00:06:26.664 --rc geninfo_unexecuted_blocks=1 00:06:26.664 00:06:26.664 ' 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.664 09:44:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.664 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.665 09:45:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.235 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.235 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.235 09:45:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.235 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.235 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.235 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.236 09:45:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:33.236 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:33.236 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.236 09:45:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:33.236 Found net devices under 0000:86:00.0: cvl_0_0 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:33.236 Found net devices under 0000:86:00.1: cvl_0_1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.236 09:45:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.236 09:45:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.236 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:33.237 00:06:33.237 --- 10.0.0.2 ping statistics --- 00:06:33.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.237 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:06:33.237 00:06:33.237 --- 10.0.0.1 ping statistics --- 00:06:33.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.237 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2496208 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2496208 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2496208 ']' 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.237 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.237 [2024-11-20 09:45:06.135328] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:06:33.237 [2024-11-20 09:45:06.135371] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.237 [2024-11-20 09:45:06.216561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.237 [2024-11-20 09:45:06.257943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.237 [2024-11-20 09:45:06.257983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.237 [2024-11-20 09:45:06.257990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.237 [2024-11-20 09:45:06.257997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.237 [2024-11-20 09:45:06.258001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:33.237 [2024-11-20 09:45:06.259632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.237 [2024-11-20 09:45:06.259739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.237 [2024-11-20 09:45:06.259847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.237 [2024-11-20 09:45:06.259848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.495 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.495 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.495 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.495 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.495 09:45:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 [2024-11-20 09:45:07.021616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:33.495 09:45:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.495 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 Malloc0 00:06:33.753 [2024-11-20 09:45:07.093341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2496601 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2496601 /var/tmp/bdevperf.sock 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2496601 ']' 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:33.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:33.753 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:33.754 { 00:06:33.754 "params": { 00:06:33.754 "name": "Nvme$subsystem", 00:06:33.754 "trtype": "$TEST_TRANSPORT", 00:06:33.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.754 "adrfam": "ipv4", 00:06:33.754 "trsvcid": "$NVMF_PORT", 00:06:33.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.754 "hdgst": ${hdgst:-false}, 
00:06:33.754 "ddgst": ${ddgst:-false} 00:06:33.754 }, 00:06:33.754 "method": "bdev_nvme_attach_controller" 00:06:33.754 } 00:06:33.754 EOF 00:06:33.754 )") 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:33.754 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:33.754 "params": { 00:06:33.754 "name": "Nvme0", 00:06:33.754 "trtype": "tcp", 00:06:33.754 "traddr": "10.0.0.2", 00:06:33.754 "adrfam": "ipv4", 00:06:33.754 "trsvcid": "4420", 00:06:33.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.754 "hdgst": false, 00:06:33.754 "ddgst": false 00:06:33.754 }, 00:06:33.754 "method": "bdev_nvme_attach_controller" 00:06:33.754 }' 00:06:33.754 [2024-11-20 09:45:07.189300] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:06:33.754 [2024-11-20 09:45:07.189350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496601 ] 00:06:33.754 [2024-11-20 09:45:07.267258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.754 [2024-11-20 09:45:07.308237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.011 Running I/O for 10 seconds... 
00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.011 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.269 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.269 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:34.269 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:34.269 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.528 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.528 [2024-11-20 09:45:07.912152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.528 [2024-11-20 09:45:07.912191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.528 [2024-11-20 09:45:07.912212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.528 [2024-11-20 09:45:07.912220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.528 [2024-11-20 09:45:07.912236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.528 [2024-11-20 09:45:07.912243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.528 [2024-11-20 09:45:07.912251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.528 [2024-11-20 09:45:07.912258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.528 [... 55 similar NOTICE pairs elided: WRITE sqid:1 cid:47-63 (lba 104320-106368) and READ sqid:1 cid:0-37 (lba 98304-103040), each len:128, every command completed ABORTED - SQ DELETION (00/08) ...] 00:06:34.530 [2024-11-20 09:45:07.913072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.530 [2024-11-20 09:45:07.913080] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.530 [2024-11-20 09:45:07.913088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.530 [2024-11-20 09:45:07.913095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.530 [2024-11-20 09:45:07.913103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.530 [2024-11-20 09:45:07.913109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.530 [2024-11-20 09:45:07.913117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.530 [2024-11-20 09:45:07.913123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.530 [2024-11-20 09:45:07.913131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.530 [2024-11-20 09:45:07.913138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.530 [2024-11-20 09:45:07.914090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:34.530 task offset: 103808 on job bdev=Nvme0n1 fails 00:06:34.530 00:06:34.530 Latency(us) 00:06:34.530 [2024-11-20T08:45:08.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:34.530 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:34.530 Job: Nvme0n1 
ended in about 0.40 seconds with error 00:06:34.530 Verification LBA range: start 0x0 length 0x400 00:06:34.530 Nvme0n1 : 0.40 1912.00 119.50 159.33 0.00 30077.37 1497.97 27337.87 00:06:34.530 [2024-11-20T08:45:08.112Z] =================================================================================================================== 00:06:34.530 [2024-11-20T08:45:08.112Z] Total : 1912.00 119.50 159.33 0.00 30077.37 1497.97 27337.87 00:06:34.530 [2024-11-20 09:45:07.916466] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.530 [2024-11-20 09:45:07.916488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1478500 (9): Bad file descriptor 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.530 09:45:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:34.530 [2024-11-20 09:45:07.968259] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
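Editor's note: the `waitforio` trace earlier in this run (repeated `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1` polls piped through `jq -r '.bdevs[0].num_read_ops'`, yielding `read_io_count=78` and then `707`) is a bounded poll: up to 10 attempts, 0.25 s apart, until at least 100 reads complete. A standalone sketch follows; the RPC query is replaced by a stub (an assumption, so the sketch runs anywhere) whose numbers mimic the growth seen in the log:

```shell
# Sketch of the waitforio poll loop. stub_num_read_ops stands in for the
# bdev_get_iostat + jq query; it is NOT part of the real test suite.
stub_num_read_ops() {
  echo $(( (11 - $1) * 78 ))   # attempt at i=10 -> 78, i=9 -> 156, ...
}
waitforio() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(stub_num_read_ops "$i")
    if [ "$count" -ge 100 ]; then
      ret=0                    # enough read I/O observed
      break
    fi
    sleep 0.25
  done
  return $ret
}
waitforio && echo "enough I/O observed"
```

Once the loop returns 0, the test proceeds to `nvmf_subsystem_remove_host`, which is what triggers the SQ-deletion aborts and controller reset logged above.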
00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2496601 00:06:35.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2496601) - No such process 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:35.463 { 00:06:35.463 "params": { 00:06:35.463 "name": "Nvme$subsystem", 00:06:35.463 "trtype": "$TEST_TRANSPORT", 00:06:35.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:35.463 "adrfam": "ipv4", 00:06:35.463 "trsvcid": "$NVMF_PORT", 00:06:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:35.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:35.463 "hdgst": ${hdgst:-false}, 00:06:35.463 "ddgst": ${ddgst:-false} 00:06:35.463 }, 00:06:35.463 "method": "bdev_nvme_attach_controller" 00:06:35.463 } 00:06:35.463 EOF 00:06:35.463 )") 00:06:35.463 
09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:35.463 09:45:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:35.463 "params": { 00:06:35.463 "name": "Nvme0", 00:06:35.463 "trtype": "tcp", 00:06:35.463 "traddr": "10.0.0.2", 00:06:35.463 "adrfam": "ipv4", 00:06:35.463 "trsvcid": "4420", 00:06:35.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:35.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:35.463 "hdgst": false, 00:06:35.463 "ddgst": false 00:06:35.463 }, 00:06:35.463 "method": "bdev_nvme_attach_controller" 00:06:35.463 }' 00:06:35.463 [2024-11-20 09:45:08.981274] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:06:35.463 [2024-11-20 09:45:08.981323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496876 ] 00:06:35.720 [2024-11-20 09:45:09.056657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.720 [2024-11-20 09:45:09.095509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.720 Running I/O for 1 seconds... 
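Editor's note: the second bdevperf launch above passes `--json /dev/fd/62`, i.e. the generated config never touches disk; it arrives on an anonymous file descriptor via bash process substitution. A tiny demonstration of the mechanism, with `cat` standing in for bdevperf and a stub generator (both assumptions for illustration):

```shell
# Sketch of the --json /dev/fd/NN mechanism: <(...) exposes a command's
# output as a readable path. gen_cfg is a stand-in, not the suite's helper.
gen_cfg() { printf '{ "method": "bdev_nvme_attach_controller" }\n'; }
cat <(gen_cfg)   # bdevperf is invoked analogously: bdevperf --json <(gen_cfg) ...
```

This is why the trace shows `/dev/fd/62` rather than a temp-file path.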
00:06:37.092 2019.00 IOPS, 126.19 MiB/s 00:06:37.092 Latency(us) 00:06:37.092 [2024-11-20T08:45:10.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.092 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:37.092 Verification LBA range: start 0x0 length 0x400 00:06:37.092 Nvme0n1 : 1.02 2044.49 127.78 0.00 0.00 30709.60 2278.16 26963.38 00:06:37.092 [2024-11-20T08:45:10.674Z] =================================================================================================================== 00:06:37.092 [2024-11-20T08:45:10.674Z] Total : 2044.49 127.78 0.00 0.00 30709.60 2278.16 26963.38 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.092 09:45:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.092 rmmod nvme_tcp 00:06:37.092 rmmod nvme_fabrics 00:06:37.092 rmmod nvme_keyring 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2496208 ']' 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2496208 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2496208 ']' 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2496208 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496208 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496208' 00:06:37.092 killing process with pid 2496208 00:06:37.092 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2496208 00:06:37.092 09:45:10 
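The `set +e` / `for i in {1..20}` / `modprobe -v -r` / `set -e` / `return 0` sequence above is a best-effort unload loop: module removal can fail while references drain, so the cleanup path tolerates errors and retries. A generic sketch of that pattern (the helper name is illustrative; `nvmf/common.sh` inlines it around `modprobe -v -r nvme-tcp`):

```shell
# Run a command up to N times, never propagating failure (cleanup-path pattern).
best_effort_retry() {
  local tries=$1; shift
  set +e
  local i
  for ((i = 0; i < tries; i++)); do
    "$@" && break
  done
  set -e
  return 0   # best effort: cleanup must not fail the test
}

# e.g. best_effort_retry 20 modprobe -v -r nvme-tcp
```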
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2496208 00:06:37.351 [2024-11-20 09:45:10.764553] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.351 09:45:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:39.887 00:06:39.887 real 0m13.062s 00:06:39.887 user 0m22.122s 
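The `iptr` step above works because every firewall rule the harness adds carries an `SPDK_NVMF` comment, so teardown can drop exactly those lines from `iptables-save` output and restore the rest. The filter stage can be sketched (and exercised without root) on its own:

```shell
# Filter stage between iptables-save and iptables-restore: drop harness rules,
# identified by the SPDK_NVMF comment tag. Restoring requires root:
#   iptables-save | strip_spdk_rules | iptables-restore
strip_spdk_rules() {
  grep -v 'SPDK_NVMF'
}
```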
00:06:39.887 sys 0m5.713s 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.887 ************************************ 00:06:39.887 END TEST nvmf_host_management 00:06:39.887 ************************************ 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:39.887 ************************************ 00:06:39.887 START TEST nvmf_lvol 00:06:39.887 ************************************ 00:06:39.887 09:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:39.887 * Looking for test storage... 
00:06:39.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.887 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.888 09:45:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.888 --rc genhtml_branch_coverage=1 00:06:39.888 --rc genhtml_function_coverage=1 00:06:39.888 --rc genhtml_legend=1 00:06:39.888 --rc geninfo_all_blocks=1 00:06:39.888 --rc geninfo_unexecuted_blocks=1 
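The `lt 1.15 2` trace above is `cmp_versions` from `scripts/common.sh` deciding the installed lcov predates version 2: both strings are split into components and compared numerically, left to right. A simplified sketch of that comparison (splitting on dots only, whereas the real script also splits on `-` and `:`):

```shell
# Return 0 if $1 < $2 as dotted numeric versions (simplified cmp_versions).
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal => not less-than
}
```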
00:06:39.888 00:06:39.888 ' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.888 --rc genhtml_branch_coverage=1 00:06:39.888 --rc genhtml_function_coverage=1 00:06:39.888 --rc genhtml_legend=1 00:06:39.888 --rc geninfo_all_blocks=1 00:06:39.888 --rc geninfo_unexecuted_blocks=1 00:06:39.888 00:06:39.888 ' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.888 --rc genhtml_branch_coverage=1 00:06:39.888 --rc genhtml_function_coverage=1 00:06:39.888 --rc genhtml_legend=1 00:06:39.888 --rc geninfo_all_blocks=1 00:06:39.888 --rc geninfo_unexecuted_blocks=1 00:06:39.888 00:06:39.888 ' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.888 --rc genhtml_branch_coverage=1 00:06:39.888 --rc genhtml_function_coverage=1 00:06:39.888 --rc genhtml_legend=1 00:06:39.888 --rc geninfo_all_blocks=1 00:06:39.888 --rc geninfo_unexecuted_blocks=1 00:06:39.888 00:06:39.888 ' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.888 09:45:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:39.888 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
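The `[: : integer expression expected` warning at `nvmf/common.sh: line 33` above comes from handing an empty string to a numeric test (`'[' '' -eq 1 ']'`). A parameter-expansion default makes such a test well-defined; `NO_HUGE_FLAG` below is an illustrative stand-in, not the script's actual variable:

```shell
# Empty variable in a numeric test reproduces the warning; defaulting fixes it.
NO_HUGE_FLAG=""
if [ "${NO_HUGE_FLAG:-0}" -eq 1 ]; then   # ':-0' also covers the empty case
  state=huge-disabled
else
  state=huge-enabled
fi
echo "$state"
```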
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:39.889 09:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:46.464 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:46.464 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.464 
09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:46.464 Found net devices under 0000:86:00.0: cvl_0_0 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.464 09:45:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:46.464 Found net devices under 0000:86:00.1: cvl_0_1 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.464 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.465 09:45:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
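The `nvmf_tcp_init` steps traced above move the target-side port (`cvl_0_0`) into its own network namespace so the target and initiator can talk over real NICs on a single host, then open TCP port 4420. A condensed, root-only sketch of that plumbing, with interface and IP names taken from the log:

```shell
# Condensed sketch of nvmf_tcp_init as traced above. Requires root; the
# function name is illustrative, interface/IP values follow the log.
setup_target_ns() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1

  ip -4 addr flush "$tgt_if"
  ip -4 addr flush "$ini_if"

  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"             # target NIC lives in the netns

  ip addr add 10.0.0.1/24 dev "$ini_if"         # initiator side, default ns
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up

  # allow NVMe/TCP (port 4420) in, tagged so teardown can strip the rule later
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
}
```

The two pings that follow in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this wiring in both directions before the target starts.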
00:06:46.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:06:46.465 00:06:46.465 --- 10.0.0.2 ping statistics --- 00:06:46.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.465 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:06:46.465 00:06:46.465 --- 10.0.0.1 ping statistics --- 00:06:46.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.465 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2500650 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2500650 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2500650 ']' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.465 [2024-11-20 09:45:19.276375] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:06:46.465 [2024-11-20 09:45:19.276428] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.465 [2024-11-20 09:45:19.356137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.465 [2024-11-20 09:45:19.397795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.465 [2024-11-20 09:45:19.397830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.465 [2024-11-20 09:45:19.397838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.465 [2024-11-20 09:45:19.397843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.465 [2024-11-20 09:45:19.397848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.465 [2024-11-20 09:45:19.399117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.465 [2024-11-20 09:45:19.399251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.465 [2024-11-20 09:45:19.399252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.465 [2024-11-20 09:45:19.708177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:46.465 09:45:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.737 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.737 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:47.024 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:47.326 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9b608a92-257b-4a35-a7e3-cac476091516 00:06:47.326 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b608a92-257b-4a35-a7e3-cac476091516 lvol 20 00:06:47.326 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1b311b64-d586-437e-90c1-aca890786ee0 00:06:47.326 09:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.584 09:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b311b64-d586-437e-90c1-aca890786ee0 00:06:47.842 09:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.842 [2024-11-20 09:45:21.360612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.842 09:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.100 09:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2501147 00:06:48.100 09:45:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.100 09:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:49.035 09:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1b311b64-d586-437e-90c1-aca890786ee0 MY_SNAPSHOT 00:06:49.292 09:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ee1fbd7d-7be6-4d83-bc48-71ea2f249415 00:06:49.292 09:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1b311b64-d586-437e-90c1-aca890786ee0 30 00:06:49.549 09:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ee1fbd7d-7be6-4d83-bc48-71ea2f249415 MY_CLONE 00:06:49.807 09:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9a0cec1d-da84-4f6a-ba33-386baa35b5ab 00:06:49.807 09:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9a0cec1d-da84-4f6a-ba33-386baa35b5ab 00:06:50.371 09:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2501147 00:06:58.478 Initializing NVMe Controllers 00:06:58.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:58.478 Controller IO queue size 128, less than required. 00:06:58.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:58.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:58.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:58.478 Initialization complete. Launching workers. 00:06:58.478 ======================================================== 00:06:58.478 Latency(us) 00:06:58.478 Device Information : IOPS MiB/s Average min max 00:06:58.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12214.30 47.71 10479.53 1651.25 57357.84 00:06:58.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12384.60 48.38 10338.33 1113.37 59807.44 00:06:58.478 ======================================================== 00:06:58.478 Total : 24598.90 96.09 10408.44 1113.37 59807.44 00:06:58.478 00:06:58.478 09:45:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.736 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b311b64-d586-437e-90c1-aca890786ee0 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b608a92-257b-4a35-a7e3-cac476091516 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.995 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.995 rmmod nvme_tcp 00:06:59.253 rmmod nvme_fabrics 00:06:59.253 rmmod nvme_keyring 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2500650 ']' 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2500650 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2500650 ']' 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2500650 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500650 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.253 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.254 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500650' 00:06:59.254 killing process with pid 2500650 00:06:59.254 09:45:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2500650 00:06:59.254 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2500650 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.513 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.514 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.514 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.514 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.514 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.514 09:45:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.421 00:07:01.421 real 0m22.009s 00:07:01.421 user 1m2.985s 00:07:01.421 sys 0m7.750s 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.421 ************************************ 00:07:01.421 END TEST 
nvmf_lvol 00:07:01.421 ************************************ 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.421 09:45:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.681 ************************************ 00:07:01.681 START TEST nvmf_lvs_grow 00:07:01.681 ************************************ 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.681 * Looking for test storage... 00:07:01.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.681 09:45:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.681 --rc genhtml_branch_coverage=1 00:07:01.681 --rc genhtml_function_coverage=1 00:07:01.681 --rc genhtml_legend=1 00:07:01.681 --rc geninfo_all_blocks=1 00:07:01.681 --rc geninfo_unexecuted_blocks=1 00:07:01.681 00:07:01.681 ' 
00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.681 --rc genhtml_branch_coverage=1 00:07:01.681 --rc genhtml_function_coverage=1 00:07:01.681 --rc genhtml_legend=1 00:07:01.681 --rc geninfo_all_blocks=1 00:07:01.681 --rc geninfo_unexecuted_blocks=1 00:07:01.681 00:07:01.681 ' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.681 --rc genhtml_branch_coverage=1 00:07:01.681 --rc genhtml_function_coverage=1 00:07:01.681 --rc genhtml_legend=1 00:07:01.681 --rc geninfo_all_blocks=1 00:07:01.681 --rc geninfo_unexecuted_blocks=1 00:07:01.681 00:07:01.681 ' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.681 --rc genhtml_branch_coverage=1 00:07:01.681 --rc genhtml_function_coverage=1 00:07:01.681 --rc genhtml_legend=1 00:07:01.681 --rc geninfo_all_blocks=1 00:07:01.681 --rc geninfo_unexecuted_blocks=1 00:07:01.681 00:07:01.681 ' 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.681 09:45:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.681 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.682 
09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.682 09:45:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.682 
09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.682 09:45:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.255 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.256 
09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.256 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.256 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.256 09:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.256 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:08.257 09:45:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:07:08.257 00:07:08.257 --- 10.0.0.2 ping statistics --- 00:07:08.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.257 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:07:08.257 00:07:08.257 --- 10.0.0.1 ping statistics --- 00:07:08.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.257 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2506533 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2506533 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2506533 ']' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.257 [2024-11-20 09:45:41.335344] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:07:08.257 [2024-11-20 09:45:41.335388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.257 [2024-11-20 09:45:41.409239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.257 [2024-11-20 09:45:41.460105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.257 [2024-11-20 09:45:41.460150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.257 [2024-11-20 09:45:41.460162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.257 [2024-11-20 09:45:41.460171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.257 [2024-11-20 09:45:41.460179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:08.257 [2024-11-20 09:45:41.460926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:08.257 [2024-11-20 09:45:41.771662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.257 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.515 ************************************ 00:07:08.515 START TEST lvs_grow_clean 00:07:08.515 ************************************ 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.515 09:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.515 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:08.515 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:08.772 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b8ffa991-2560-490f-876b-9680f2da655d 00:07:08.772 09:45:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:08.772 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:09.029 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:09.029 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:09.029 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b8ffa991-2560-490f-876b-9680f2da655d lvol 150 00:07:09.287 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 00:07:09.287 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.287 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:09.287 [2024-11-20 09:45:42.807886] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:09.287 [2024-11-20 09:45:42.807937] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:09.287 true 00:07:09.287 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:09.287 09:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:09.545 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:09.545 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.803 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 00:07:10.062 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:10.062 [2024-11-20 09:45:43.558155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.062 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2507032 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2507032 /var/tmp/bdevperf.sock 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2507032 ']' 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:10.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.320 09:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.320 [2024-11-20 09:45:43.792830] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:07:10.320 [2024-11-20 09:45:43.792875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507032 ] 00:07:10.320 [2024-11-20 09:45:43.866730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.579 [2024-11-20 09:45:43.907159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.579 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.579 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:10.579 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:10.837 Nvme0n1 00:07:10.837 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:11.094 [ 00:07:11.094 { 00:07:11.094 "name": "Nvme0n1", 00:07:11.094 "aliases": [ 00:07:11.094 "569a1f94-ad11-4a8c-a43d-cb9c1d55cc69" 00:07:11.094 ], 00:07:11.094 "product_name": "NVMe disk", 00:07:11.094 "block_size": 4096, 00:07:11.094 "num_blocks": 38912, 00:07:11.094 "uuid": "569a1f94-ad11-4a8c-a43d-cb9c1d55cc69", 00:07:11.094 "numa_id": 1, 00:07:11.094 "assigned_rate_limits": { 00:07:11.094 "rw_ios_per_sec": 0, 00:07:11.094 "rw_mbytes_per_sec": 0, 00:07:11.094 "r_mbytes_per_sec": 0, 00:07:11.094 "w_mbytes_per_sec": 0 00:07:11.094 }, 00:07:11.094 "claimed": false, 00:07:11.094 "zoned": false, 00:07:11.094 "supported_io_types": { 00:07:11.094 "read": true, 
00:07:11.094 "write": true, 00:07:11.094 "unmap": true, 00:07:11.094 "flush": true, 00:07:11.094 "reset": true, 00:07:11.094 "nvme_admin": true, 00:07:11.094 "nvme_io": true, 00:07:11.094 "nvme_io_md": false, 00:07:11.094 "write_zeroes": true, 00:07:11.094 "zcopy": false, 00:07:11.094 "get_zone_info": false, 00:07:11.094 "zone_management": false, 00:07:11.094 "zone_append": false, 00:07:11.094 "compare": true, 00:07:11.094 "compare_and_write": true, 00:07:11.094 "abort": true, 00:07:11.094 "seek_hole": false, 00:07:11.094 "seek_data": false, 00:07:11.094 "copy": true, 00:07:11.094 "nvme_iov_md": false 00:07:11.094 }, 00:07:11.094 "memory_domains": [ 00:07:11.094 { 00:07:11.094 "dma_device_id": "system", 00:07:11.094 "dma_device_type": 1 00:07:11.094 } 00:07:11.094 ], 00:07:11.095 "driver_specific": { 00:07:11.095 "nvme": [ 00:07:11.095 { 00:07:11.095 "trid": { 00:07:11.095 "trtype": "TCP", 00:07:11.095 "adrfam": "IPv4", 00:07:11.095 "traddr": "10.0.0.2", 00:07:11.095 "trsvcid": "4420", 00:07:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:11.095 }, 00:07:11.095 "ctrlr_data": { 00:07:11.095 "cntlid": 1, 00:07:11.095 "vendor_id": "0x8086", 00:07:11.095 "model_number": "SPDK bdev Controller", 00:07:11.095 "serial_number": "SPDK0", 00:07:11.095 "firmware_revision": "25.01", 00:07:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.095 "oacs": { 00:07:11.095 "security": 0, 00:07:11.095 "format": 0, 00:07:11.095 "firmware": 0, 00:07:11.095 "ns_manage": 0 00:07:11.095 }, 00:07:11.095 "multi_ctrlr": true, 00:07:11.095 "ana_reporting": false 00:07:11.095 }, 00:07:11.095 "vs": { 00:07:11.095 "nvme_version": "1.3" 00:07:11.095 }, 00:07:11.095 "ns_data": { 00:07:11.095 "id": 1, 00:07:11.095 "can_share": true 00:07:11.095 } 00:07:11.095 } 00:07:11.095 ], 00:07:11.095 "mp_policy": "active_passive" 00:07:11.095 } 00:07:11.095 } 00:07:11.095 ] 00:07:11.095 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2507060 00:07:11.095 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.095 09:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:11.095 Running I/O for 10 seconds... 00:07:12.468 Latency(us) 00:07:12.468 [2024-11-20T08:45:46.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.468 Nvme0n1 : 1.00 23582.00 92.12 0.00 0.00 0.00 0.00 0.00 00:07:12.468 [2024-11-20T08:45:46.050Z] =================================================================================================================== 00:07:12.468 [2024-11-20T08:45:46.050Z] Total : 23582.00 92.12 0.00 0.00 0.00 0.00 0.00 00:07:12.468 00:07:13.034 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:13.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.293 Nvme0n1 : 2.00 23675.00 92.48 0.00 0.00 0.00 0.00 0.00 00:07:13.293 [2024-11-20T08:45:46.875Z] =================================================================================================================== 00:07:13.293 [2024-11-20T08:45:46.875Z] Total : 23675.00 92.48 0.00 0.00 0.00 0.00 0.00 00:07:13.293 00:07:13.293 true 00:07:13.293 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:13.293 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:13.551 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:13.551 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:13.551 09:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2507060 00:07:14.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.117 Nvme0n1 : 3.00 23729.00 92.69 0.00 0.00 0.00 0.00 0.00 00:07:14.117 [2024-11-20T08:45:47.699Z] =================================================================================================================== 00:07:14.117 [2024-11-20T08:45:47.699Z] Total : 23729.00 92.69 0.00 0.00 0.00 0.00 0.00 00:07:14.117 00:07:15.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.492 Nvme0n1 : 4.00 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:07:15.492 [2024-11-20T08:45:49.074Z] =================================================================================================================== 00:07:15.492 [2024-11-20T08:45:49.074Z] Total : 23788.50 92.92 0.00 0.00 0.00 0.00 0.00 00:07:15.492 00:07:16.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.426 Nvme0n1 : 5.00 23823.60 93.06 0.00 0.00 0.00 0.00 0.00 00:07:16.426 [2024-11-20T08:45:50.008Z] =================================================================================================================== 00:07:16.426 [2024-11-20T08:45:50.008Z] Total : 23823.60 93.06 0.00 0.00 0.00 0.00 0.00 00:07:16.426 00:07:17.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.360 Nvme0n1 : 6.00 23812.50 93.02 0.00 0.00 0.00 0.00 0.00 00:07:17.360 [2024-11-20T08:45:50.942Z] =================================================================================================================== 00:07:17.360 
[2024-11-20T08:45:50.942Z] Total : 23812.50 93.02 0.00 0.00 0.00 0.00 0.00 00:07:17.360 00:07:18.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.294 Nvme0n1 : 7.00 23834.29 93.10 0.00 0.00 0.00 0.00 0.00 00:07:18.294 [2024-11-20T08:45:51.876Z] =================================================================================================================== 00:07:18.294 [2024-11-20T08:45:51.876Z] Total : 23834.29 93.10 0.00 0.00 0.00 0.00 0.00 00:07:18.294 00:07:19.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.228 Nvme0n1 : 8.00 23851.38 93.17 0.00 0.00 0.00 0.00 0.00 00:07:19.228 [2024-11-20T08:45:52.810Z] =================================================================================================================== 00:07:19.228 [2024-11-20T08:45:52.810Z] Total : 23851.38 93.17 0.00 0.00 0.00 0.00 0.00 00:07:19.228 00:07:20.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.163 Nvme0n1 : 9.00 23870.67 93.24 0.00 0.00 0.00 0.00 0.00 00:07:20.163 [2024-11-20T08:45:53.745Z] =================================================================================================================== 00:07:20.163 [2024-11-20T08:45:53.745Z] Total : 23870.67 93.24 0.00 0.00 0.00 0.00 0.00 00:07:20.163 00:07:21.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.099 Nvme0n1 : 10.00 23882.80 93.29 0.00 0.00 0.00 0.00 0.00 00:07:21.099 [2024-11-20T08:45:54.681Z] =================================================================================================================== 00:07:21.099 [2024-11-20T08:45:54.681Z] Total : 23882.80 93.29 0.00 0.00 0.00 0.00 0.00 00:07:21.099 00:07:21.099 00:07:21.099 Latency(us) 00:07:21.099 [2024-11-20T08:45:54.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:21.099 Nvme0n1 : 10.01 23881.14 93.29 0.00 0.00 5356.67 1435.55 9861.61 00:07:21.099 [2024-11-20T08:45:54.681Z] =================================================================================================================== 00:07:21.099 [2024-11-20T08:45:54.681Z] Total : 23881.14 93.29 0.00 0.00 5356.67 1435.55 9861.61 00:07:21.099 { 00:07:21.099 "results": [ 00:07:21.099 { 00:07:21.099 "job": "Nvme0n1", 00:07:21.099 "core_mask": "0x2", 00:07:21.099 "workload": "randwrite", 00:07:21.099 "status": "finished", 00:07:21.099 "queue_depth": 128, 00:07:21.099 "io_size": 4096, 00:07:21.099 "runtime": 10.005342, 00:07:21.099 "iops": 23881.142693573092, 00:07:21.099 "mibps": 93.28571364676989, 00:07:21.099 "io_failed": 0, 00:07:21.099 "io_timeout": 0, 00:07:21.099 "avg_latency_us": 5356.674933833481, 00:07:21.099 "min_latency_us": 1435.5504761904763, 00:07:21.099 "max_latency_us": 9861.60761904762 00:07:21.099 } 00:07:21.099 ], 00:07:21.099 "core_count": 1 00:07:21.099 } 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2507032 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2507032 ']' 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2507032 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507032 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:21.357 09:45:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507032' 00:07:21.357 killing process with pid 2507032 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2507032 00:07:21.357 Received shutdown signal, test time was about 10.000000 seconds 00:07:21.357 00:07:21.357 Latency(us) 00:07:21.357 [2024-11-20T08:45:54.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.357 [2024-11-20T08:45:54.939Z] =================================================================================================================== 00:07:21.357 [2024-11-20T08:45:54.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2507032 00:07:21.357 09:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.616 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.874 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:21.874 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:22.131 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:22.131 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:22.131 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.131 [2024-11-20 09:45:55.642967] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.131 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.132 
09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.132 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:22.390 request: 00:07:22.390 { 00:07:22.390 "uuid": "b8ffa991-2560-490f-876b-9680f2da655d", 00:07:22.390 "method": "bdev_lvol_get_lvstores", 00:07:22.390 "req_id": 1 00:07:22.390 } 00:07:22.390 Got JSON-RPC error response 00:07:22.390 response: 00:07:22.390 { 00:07:22.390 "code": -19, 00:07:22.390 "message": "No such device" 00:07:22.390 } 00:07:22.390 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:22.390 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.390 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.390 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.390 09:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.648 aio_bdev 00:07:22.648 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 00:07:22.648 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 00:07:22.648 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.649 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:22.649 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.649 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.649 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.649 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 -t 2000 00:07:22.907 [ 00:07:22.907 { 00:07:22.907 "name": "569a1f94-ad11-4a8c-a43d-cb9c1d55cc69", 00:07:22.907 "aliases": [ 00:07:22.907 "lvs/lvol" 00:07:22.907 ], 00:07:22.907 "product_name": "Logical Volume", 00:07:22.907 "block_size": 4096, 00:07:22.907 "num_blocks": 38912, 00:07:22.907 "uuid": "569a1f94-ad11-4a8c-a43d-cb9c1d55cc69", 00:07:22.907 "assigned_rate_limits": { 00:07:22.907 "rw_ios_per_sec": 0, 00:07:22.907 "rw_mbytes_per_sec": 0, 00:07:22.907 "r_mbytes_per_sec": 0, 00:07:22.907 "w_mbytes_per_sec": 0 00:07:22.907 }, 00:07:22.907 "claimed": false, 00:07:22.907 "zoned": false, 00:07:22.907 "supported_io_types": { 00:07:22.907 "read": true, 00:07:22.907 "write": true, 00:07:22.907 "unmap": true, 00:07:22.907 "flush": false, 00:07:22.907 "reset": true, 00:07:22.907 
"nvme_admin": false, 00:07:22.907 "nvme_io": false, 00:07:22.907 "nvme_io_md": false, 00:07:22.907 "write_zeroes": true, 00:07:22.907 "zcopy": false, 00:07:22.907 "get_zone_info": false, 00:07:22.907 "zone_management": false, 00:07:22.907 "zone_append": false, 00:07:22.907 "compare": false, 00:07:22.907 "compare_and_write": false, 00:07:22.907 "abort": false, 00:07:22.907 "seek_hole": true, 00:07:22.907 "seek_data": true, 00:07:22.907 "copy": false, 00:07:22.907 "nvme_iov_md": false 00:07:22.907 }, 00:07:22.907 "driver_specific": { 00:07:22.907 "lvol": { 00:07:22.907 "lvol_store_uuid": "b8ffa991-2560-490f-876b-9680f2da655d", 00:07:22.907 "base_bdev": "aio_bdev", 00:07:22.907 "thin_provision": false, 00:07:22.907 "num_allocated_clusters": 38, 00:07:22.907 "snapshot": false, 00:07:22.907 "clone": false, 00:07:22.907 "esnap_clone": false 00:07:22.907 } 00:07:22.907 } 00:07:22.907 } 00:07:22.907 ] 00:07:22.907 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:22.907 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:22.907 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.165 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.165 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:23.165 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.423 09:45:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.423 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 569a1f94-ad11-4a8c-a43d-cb9c1d55cc69 00:07:23.424 09:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8ffa991-2560-490f-876b-9680f2da655d 00:07:23.682 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.940 00:07:23.940 real 0m15.525s 00:07:23.940 user 0m15.078s 00:07:23.940 sys 0m1.457s 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:23.940 ************************************ 00:07:23.940 END TEST lvs_grow_clean 00:07:23.940 ************************************ 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.940 ************************************ 
00:07:23.940 START TEST lvs_grow_dirty 00:07:23.940 ************************************ 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.940 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.198 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:24.198 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:24.456 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=de6dfd62-ef95-46a1-b702-d922d1563017 00:07:24.456 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:24.456 09:45:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u de6dfd62-ef95-46a1-b702-d922d1563017 lvol 150 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7d92778-1379-4136-a61c-e99209ec65b5 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.714 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.972 [2024-11-20 09:45:58.403181] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:24.972 [2024-11-20 09:45:58.403257] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.972 true 00:07:24.972 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:24.972 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:25.230 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:25.230 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.230 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7d92778-1379-4136-a61c-e99209ec65b5 00:07:25.488 09:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.745 [2024-11-20 09:45:59.149418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.745 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2509640 00:07:26.003 09:45:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2509640 /var/tmp/bdevperf.sock 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2509640 ']' 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.003 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.003 [2024-11-20 09:45:59.378099] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:07:26.003 [2024-11-20 09:45:59.378142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509640 ] 00:07:26.003 [2024-11-20 09:45:59.452604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.003 [2024-11-20 09:45:59.495118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.261 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.261 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:26.261 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:26.519 Nvme0n1 00:07:26.519 09:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.519 [ 00:07:26.519 { 00:07:26.519 "name": "Nvme0n1", 00:07:26.519 "aliases": [ 00:07:26.519 "e7d92778-1379-4136-a61c-e99209ec65b5" 00:07:26.519 ], 00:07:26.519 "product_name": "NVMe disk", 00:07:26.519 "block_size": 4096, 00:07:26.519 "num_blocks": 38912, 00:07:26.519 "uuid": "e7d92778-1379-4136-a61c-e99209ec65b5", 00:07:26.519 "numa_id": 1, 00:07:26.519 "assigned_rate_limits": { 00:07:26.519 "rw_ios_per_sec": 0, 00:07:26.519 "rw_mbytes_per_sec": 0, 00:07:26.519 "r_mbytes_per_sec": 0, 00:07:26.519 "w_mbytes_per_sec": 0 00:07:26.519 }, 00:07:26.519 "claimed": false, 00:07:26.519 "zoned": false, 00:07:26.519 "supported_io_types": { 00:07:26.519 "read": true, 
00:07:26.519 "write": true, 00:07:26.519 "unmap": true, 00:07:26.519 "flush": true, 00:07:26.519 "reset": true, 00:07:26.519 "nvme_admin": true, 00:07:26.519 "nvme_io": true, 00:07:26.519 "nvme_io_md": false, 00:07:26.519 "write_zeroes": true, 00:07:26.519 "zcopy": false, 00:07:26.519 "get_zone_info": false, 00:07:26.519 "zone_management": false, 00:07:26.519 "zone_append": false, 00:07:26.519 "compare": true, 00:07:26.519 "compare_and_write": true, 00:07:26.519 "abort": true, 00:07:26.519 "seek_hole": false, 00:07:26.519 "seek_data": false, 00:07:26.519 "copy": true, 00:07:26.519 "nvme_iov_md": false 00:07:26.519 }, 00:07:26.519 "memory_domains": [ 00:07:26.519 { 00:07:26.519 "dma_device_id": "system", 00:07:26.519 "dma_device_type": 1 00:07:26.519 } 00:07:26.519 ], 00:07:26.519 "driver_specific": { 00:07:26.519 "nvme": [ 00:07:26.519 { 00:07:26.519 "trid": { 00:07:26.519 "trtype": "TCP", 00:07:26.519 "adrfam": "IPv4", 00:07:26.519 "traddr": "10.0.0.2", 00:07:26.519 "trsvcid": "4420", 00:07:26.519 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.519 }, 00:07:26.519 "ctrlr_data": { 00:07:26.519 "cntlid": 1, 00:07:26.519 "vendor_id": "0x8086", 00:07:26.519 "model_number": "SPDK bdev Controller", 00:07:26.519 "serial_number": "SPDK0", 00:07:26.519 "firmware_revision": "25.01", 00:07:26.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.519 "oacs": { 00:07:26.519 "security": 0, 00:07:26.519 "format": 0, 00:07:26.519 "firmware": 0, 00:07:26.519 "ns_manage": 0 00:07:26.519 }, 00:07:26.519 "multi_ctrlr": true, 00:07:26.519 "ana_reporting": false 00:07:26.519 }, 00:07:26.519 "vs": { 00:07:26.519 "nvme_version": "1.3" 00:07:26.519 }, 00:07:26.519 "ns_data": { 00:07:26.519 "id": 1, 00:07:26.519 "can_share": true 00:07:26.519 } 00:07:26.519 } 00:07:26.519 ], 00:07:26.519 "mp_policy": "active_passive" 00:07:26.519 } 00:07:26.519 } 00:07:26.519 ] 00:07:26.519 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.519 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2509835 00:07:26.519 09:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.777 Running I/O for 10 seconds... 00:07:27.711 Latency(us) 00:07:27.711 [2024-11-20T08:46:01.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.711 Nvme0n1 : 1.00 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:07:27.711 [2024-11-20T08:46:01.293Z] =================================================================================================================== 00:07:27.711 [2024-11-20T08:46:01.293Z] Total : 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:07:27.711 00:07:28.645 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:28.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.645 Nvme0n1 : 2.00 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:07:28.645 [2024-11-20T08:46:02.227Z] =================================================================================================================== 00:07:28.645 [2024-11-20T08:46:02.227Z] Total : 23594.00 92.16 0.00 0.00 0.00 0.00 0.00 00:07:28.645 00:07:28.903 true 00:07:28.903 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:28.903 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:29.162 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:29.162 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:29.162 09:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2509835 00:07:29.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.728 Nvme0n1 : 3.00 23662.67 92.43 0.00 0.00 0.00 0.00 0.00 00:07:29.728 [2024-11-20T08:46:03.310Z] =================================================================================================================== 00:07:29.728 [2024-11-20T08:46:03.310Z] Total : 23662.67 92.43 0.00 0.00 0.00 0.00 0.00 00:07:29.728 00:07:30.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.662 Nvme0n1 : 4.00 23751.00 92.78 0.00 0.00 0.00 0.00 0.00 00:07:30.662 [2024-11-20T08:46:04.244Z] =================================================================================================================== 00:07:30.662 [2024-11-20T08:46:04.244Z] Total : 23751.00 92.78 0.00 0.00 0.00 0.00 0.00 00:07:30.662 00:07:31.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.594 Nvme0n1 : 5.00 23795.60 92.95 0.00 0.00 0.00 0.00 0.00 00:07:31.594 [2024-11-20T08:46:05.176Z] =================================================================================================================== 00:07:31.594 [2024-11-20T08:46:05.176Z] Total : 23795.60 92.95 0.00 0.00 0.00 0.00 0.00 00:07:31.594 00:07:32.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.965 Nvme0n1 : 6.00 23834.50 93.10 0.00 0.00 0.00 0.00 0.00 00:07:32.965 [2024-11-20T08:46:06.547Z] =================================================================================================================== 00:07:32.965 
[2024-11-20T08:46:06.547Z] Total : 23834.50 93.10 0.00 0.00 0.00 0.00 0.00 00:07:32.965 00:07:33.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.898 Nvme0n1 : 7.00 23867.00 93.23 0.00 0.00 0.00 0.00 0.00 00:07:33.898 [2024-11-20T08:46:07.480Z] =================================================================================================================== 00:07:33.898 [2024-11-20T08:46:07.480Z] Total : 23867.00 93.23 0.00 0.00 0.00 0.00 0.00 00:07:33.898 00:07:34.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.831 Nvme0n1 : 8.00 23879.38 93.28 0.00 0.00 0.00 0.00 0.00 00:07:34.831 [2024-11-20T08:46:08.413Z] =================================================================================================================== 00:07:34.831 [2024-11-20T08:46:08.413Z] Total : 23879.38 93.28 0.00 0.00 0.00 0.00 0.00 00:07:34.831 00:07:35.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.775 Nvme0n1 : 9.00 23905.11 93.38 0.00 0.00 0.00 0.00 0.00 00:07:35.775 [2024-11-20T08:46:09.357Z] =================================================================================================================== 00:07:35.775 [2024-11-20T08:46:09.357Z] Total : 23905.11 93.38 0.00 0.00 0.00 0.00 0.00 00:07:35.775 00:07:36.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.741 Nvme0n1 : 10.00 23897.10 93.35 0.00 0.00 0.00 0.00 0.00 00:07:36.741 [2024-11-20T08:46:10.323Z] =================================================================================================================== 00:07:36.741 [2024-11-20T08:46:10.323Z] Total : 23897.10 93.35 0.00 0.00 0.00 0.00 0.00 00:07:36.741 00:07:36.741 00:07:36.741 Latency(us) 00:07:36.741 [2024-11-20T08:46:10.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:36.741 Nvme0n1 : 10.01 23897.31 93.35 0.00 0.00 5352.84 3151.97 12607.88 00:07:36.741 [2024-11-20T08:46:10.323Z] =================================================================================================================== 00:07:36.741 [2024-11-20T08:46:10.323Z] Total : 23897.31 93.35 0.00 0.00 5352.84 3151.97 12607.88 00:07:36.741 { 00:07:36.741 "results": [ 00:07:36.741 { 00:07:36.741 "job": "Nvme0n1", 00:07:36.741 "core_mask": "0x2", 00:07:36.741 "workload": "randwrite", 00:07:36.741 "status": "finished", 00:07:36.741 "queue_depth": 128, 00:07:36.741 "io_size": 4096, 00:07:36.741 "runtime": 10.00527, 00:07:36.741 "iops": 23897.30611967493, 00:07:36.741 "mibps": 93.3488520299802, 00:07:36.741 "io_failed": 0, 00:07:36.741 "io_timeout": 0, 00:07:36.741 "avg_latency_us": 5352.83572851174, 00:07:36.741 "min_latency_us": 3151.9695238095237, 00:07:36.741 "max_latency_us": 12607.878095238095 00:07:36.741 } 00:07:36.741 ], 00:07:36.741 "core_count": 1 00:07:36.741 } 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2509640 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2509640 ']' 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2509640 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509640 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.741 09:46:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509640' 00:07:36.741 killing process with pid 2509640 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2509640 00:07:36.741 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.741 00:07:36.741 Latency(us) 00:07:36.741 [2024-11-20T08:46:10.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.741 [2024-11-20T08:46:10.323Z] =================================================================================================================== 00:07:36.741 [2024-11-20T08:46:10.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.741 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2509640 00:07:37.027 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.325 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.325 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:37.325 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.608 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:37.608 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:37.608 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2506533 00:07:37.608 09:46:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2506533 00:07:37.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2506533 Killed "${NVMF_APP[@]}" "$@" 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2511599 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2511599 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2511599 ']' 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.608 09:46:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.608 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.608 [2024-11-20 09:46:11.086095] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:07:37.608 [2024-11-20 09:46:11.086142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.608 [2024-11-20 09:46:11.169228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.867 [2024-11-20 09:46:11.210567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.867 [2024-11-20 09:46:11.210603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.867 [2024-11-20 09:46:11.210613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.867 [2024-11-20 09:46:11.210618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.867 [2024-11-20 09:46:11.210623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.867 [2024-11-20 09:46:11.211186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.432 09:46:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.689 [2024-11-20 09:46:12.106173] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:38.689 [2024-11-20 09:46:12.106258] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:38.689 [2024-11-20 09:46:12.106283] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e7d92778-1379-4136-a61c-e99209ec65b5 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e7d92778-1379-4136-a61c-e99209ec65b5 
00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.689 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.947 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7d92778-1379-4136-a61c-e99209ec65b5 -t 2000 00:07:38.947 [ 00:07:38.947 { 00:07:38.947 "name": "e7d92778-1379-4136-a61c-e99209ec65b5", 00:07:38.947 "aliases": [ 00:07:38.947 "lvs/lvol" 00:07:38.947 ], 00:07:38.947 "product_name": "Logical Volume", 00:07:38.947 "block_size": 4096, 00:07:38.947 "num_blocks": 38912, 00:07:38.947 "uuid": "e7d92778-1379-4136-a61c-e99209ec65b5", 00:07:38.947 "assigned_rate_limits": { 00:07:38.947 "rw_ios_per_sec": 0, 00:07:38.947 "rw_mbytes_per_sec": 0, 00:07:38.947 "r_mbytes_per_sec": 0, 00:07:38.947 "w_mbytes_per_sec": 0 00:07:38.947 }, 00:07:38.947 "claimed": false, 00:07:38.947 "zoned": false, 00:07:38.947 "supported_io_types": { 00:07:38.947 "read": true, 00:07:38.947 "write": true, 00:07:38.947 "unmap": true, 00:07:38.947 "flush": false, 00:07:38.947 "reset": true, 00:07:38.947 "nvme_admin": false, 00:07:38.947 "nvme_io": false, 00:07:38.947 "nvme_io_md": false, 00:07:38.947 "write_zeroes": true, 00:07:38.947 "zcopy": false, 00:07:38.948 "get_zone_info": false, 00:07:38.948 "zone_management": false, 00:07:38.948 "zone_append": 
false, 00:07:38.948 "compare": false, 00:07:38.948 "compare_and_write": false, 00:07:38.948 "abort": false, 00:07:38.948 "seek_hole": true, 00:07:38.948 "seek_data": true, 00:07:38.948 "copy": false, 00:07:38.948 "nvme_iov_md": false 00:07:38.948 }, 00:07:38.948 "driver_specific": { 00:07:38.948 "lvol": { 00:07:38.948 "lvol_store_uuid": "de6dfd62-ef95-46a1-b702-d922d1563017", 00:07:38.948 "base_bdev": "aio_bdev", 00:07:38.948 "thin_provision": false, 00:07:38.948 "num_allocated_clusters": 38, 00:07:38.948 "snapshot": false, 00:07:38.948 "clone": false, 00:07:38.948 "esnap_clone": false 00:07:38.948 } 00:07:38.948 } 00:07:38.948 } 00:07:38.948 ] 00:07:38.948 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:38.948 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:38.948 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:39.205 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:39.205 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:39.205 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:39.463 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:39.463 09:46:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:39.721 [2024-11-20 09:46:13.047100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:39.721 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:39.721 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.722 09:46:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:39.722 request: 00:07:39.722 { 00:07:39.722 "uuid": "de6dfd62-ef95-46a1-b702-d922d1563017", 00:07:39.722 "method": "bdev_lvol_get_lvstores", 00:07:39.722 "req_id": 1 00:07:39.722 } 00:07:39.722 Got JSON-RPC error response 00:07:39.722 response: 00:07:39.722 { 00:07:39.722 "code": -19, 00:07:39.722 "message": "No such device" 00:07:39.722 } 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.722 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.980 aio_bdev 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7d92778-1379-4136-a61c-e99209ec65b5 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e7d92778-1379-4136-a61c-e99209ec65b5 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.980 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.237 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7d92778-1379-4136-a61c-e99209ec65b5 -t 2000 00:07:40.237 [ 00:07:40.238 { 00:07:40.238 "name": "e7d92778-1379-4136-a61c-e99209ec65b5", 00:07:40.238 "aliases": [ 00:07:40.238 "lvs/lvol" 00:07:40.238 ], 00:07:40.238 "product_name": "Logical Volume", 00:07:40.238 "block_size": 4096, 00:07:40.238 "num_blocks": 38912, 00:07:40.238 "uuid": "e7d92778-1379-4136-a61c-e99209ec65b5", 00:07:40.238 "assigned_rate_limits": { 00:07:40.238 "rw_ios_per_sec": 0, 00:07:40.238 "rw_mbytes_per_sec": 0, 00:07:40.238 "r_mbytes_per_sec": 0, 00:07:40.238 "w_mbytes_per_sec": 0 00:07:40.238 }, 00:07:40.238 "claimed": false, 00:07:40.238 "zoned": false, 00:07:40.238 "supported_io_types": { 00:07:40.238 "read": true, 00:07:40.238 "write": true, 00:07:40.238 "unmap": true, 00:07:40.238 "flush": false, 00:07:40.238 "reset": true, 00:07:40.238 "nvme_admin": false, 00:07:40.238 "nvme_io": false, 00:07:40.238 "nvme_io_md": false, 00:07:40.238 "write_zeroes": true, 00:07:40.238 "zcopy": false, 00:07:40.238 "get_zone_info": false, 00:07:40.238 "zone_management": false, 00:07:40.238 "zone_append": false, 00:07:40.238 "compare": false, 00:07:40.238 "compare_and_write": false, 
00:07:40.238 "abort": false, 00:07:40.238 "seek_hole": true, 00:07:40.238 "seek_data": true, 00:07:40.238 "copy": false, 00:07:40.238 "nvme_iov_md": false 00:07:40.238 }, 00:07:40.238 "driver_specific": { 00:07:40.238 "lvol": { 00:07:40.238 "lvol_store_uuid": "de6dfd62-ef95-46a1-b702-d922d1563017", 00:07:40.238 "base_bdev": "aio_bdev", 00:07:40.238 "thin_provision": false, 00:07:40.238 "num_allocated_clusters": 38, 00:07:40.238 "snapshot": false, 00:07:40.238 "clone": false, 00:07:40.238 "esnap_clone": false 00:07:40.238 } 00:07:40.238 } 00:07:40.238 } 00:07:40.238 ] 00:07:40.496 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:40.496 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:40.496 09:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:40.496 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:40.496 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:40.496 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.754 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.754 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7d92778-1379-4136-a61c-e99209ec65b5 00:07:41.013 09:46:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u de6dfd62-ef95-46a1-b702-d922d1563017 00:07:41.013 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.271 00:07:41.271 real 0m17.349s 00:07:41.271 user 0m43.579s 00:07:41.271 sys 0m3.725s 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.271 ************************************ 00:07:41.271 END TEST lvs_grow_dirty 00:07:41.271 ************************************ 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:41.271 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:41.271 nvmf_trace.0 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.530 rmmod nvme_tcp 00:07:41.530 rmmod nvme_fabrics 00:07:41.530 rmmod nvme_keyring 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2511599 ']' 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2511599 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2511599 ']' 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2511599 
00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511599 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511599' 00:07:41.530 killing process with pid 2511599 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2511599 00:07:41.530 09:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2511599 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.789 09:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.691 00:07:43.691 real 0m42.186s 00:07:43.691 user 1m4.859s 00:07:43.691 sys 0m10.190s 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.691 ************************************ 00:07:43.691 END TEST nvmf_lvs_grow 00:07:43.691 ************************************ 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.691 09:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.951 ************************************ 00:07:43.951 START TEST nvmf_bdev_io_wait 00:07:43.951 ************************************ 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.951 * Looking for test storage... 
00:07:43.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.951 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.951 --rc genhtml_branch_coverage=1 00:07:43.951 --rc genhtml_function_coverage=1 00:07:43.951 --rc genhtml_legend=1 00:07:43.951 --rc geninfo_all_blocks=1 00:07:43.951 --rc geninfo_unexecuted_blocks=1 00:07:43.951 00:07:43.951 ' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.951 --rc genhtml_branch_coverage=1 00:07:43.951 --rc genhtml_function_coverage=1 00:07:43.951 --rc genhtml_legend=1 00:07:43.951 --rc geninfo_all_blocks=1 00:07:43.951 --rc geninfo_unexecuted_blocks=1 00:07:43.951 00:07:43.951 ' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.951 --rc genhtml_branch_coverage=1 00:07:43.951 --rc genhtml_function_coverage=1 00:07:43.951 --rc genhtml_legend=1 00:07:43.951 --rc geninfo_all_blocks=1 00:07:43.951 --rc geninfo_unexecuted_blocks=1 00:07:43.951 00:07:43.951 ' 00:07:43.951 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.952 --rc genhtml_branch_coverage=1 00:07:43.952 --rc genhtml_function_coverage=1 00:07:43.952 --rc genhtml_legend=1 00:07:43.952 --rc geninfo_all_blocks=1 00:07:43.952 --rc geninfo_unexecuted_blocks=1 00:07:43.952 00:07:43.952 ' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.952 09:46:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.952 09:46:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.520 09:46:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:50.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:50.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.520 09:46:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:50.520 Found net devices under 0000:86:00.0: cvl_0_0 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.520 
09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:50.520 Found net devices under 0000:86:00.1: cvl_0_1 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.520 09:46:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.520 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:07:50.521 00:07:50.521 --- 10.0.0.2 ping statistics --- 00:07:50.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.521 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:50.521 00:07:50.521 --- 10.0.0.1 ping statistics --- 00:07:50.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.521 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2515789 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2515789 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2515789 ']' 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.521 09:46:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.521 [2024-11-20 09:46:23.581769] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:07:50.521 [2024-11-20 09:46:23.581818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.521 [2024-11-20 09:46:23.662907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.521 [2024-11-20 09:46:23.706478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.521 [2024-11-20 09:46:23.706518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:50.521 [2024-11-20 09:46:23.706525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.521 [2024-11-20 09:46:23.706531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.521 [2024-11-20 09:46:23.706536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.521 [2024-11-20 09:46:23.708095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.521 [2024-11-20 09:46:23.708217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.521 [2024-11-20 09:46:23.708313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.521 [2024-11-20 09:46:23.708314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 09:46:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 [2024-11-20 09:46:24.526305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 Malloc0 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 
09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.088 [2024-11-20 09:46:24.581640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2516042 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2516044 
00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.088 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.088 { 00:07:51.088 "params": { 00:07:51.088 "name": "Nvme$subsystem", 00:07:51.088 "trtype": "$TEST_TRANSPORT", 00:07:51.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.088 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "$NVMF_PORT", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.089 "hdgst": ${hdgst:-false}, 00:07:51.089 "ddgst": ${ddgst:-false} 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 } 00:07:51.089 EOF 00:07:51.089 )") 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2516046 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.089 { 00:07:51.089 "params": { 00:07:51.089 
"name": "Nvme$subsystem", 00:07:51.089 "trtype": "$TEST_TRANSPORT", 00:07:51.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "$NVMF_PORT", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.089 "hdgst": ${hdgst:-false}, 00:07:51.089 "ddgst": ${ddgst:-false} 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 } 00:07:51.089 EOF 00:07:51.089 )") 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2516049 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.089 { 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme$subsystem", 00:07:51.089 "trtype": 
"$TEST_TRANSPORT", 00:07:51.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "$NVMF_PORT", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.089 "hdgst": ${hdgst:-false}, 00:07:51.089 "ddgst": ${ddgst:-false} 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 } 00:07:51.089 EOF 00:07:51.089 )") 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.089 { 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme$subsystem", 00:07:51.089 "trtype": "$TEST_TRANSPORT", 00:07:51.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "$NVMF_PORT", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.089 "hdgst": ${hdgst:-false}, 00:07:51.089 "ddgst": ${ddgst:-false} 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 } 00:07:51.089 EOF 00:07:51.089 )") 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2516042 
00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme1", 00:07:51.089 "trtype": "tcp", 00:07:51.089 "traddr": "10.0.0.2", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "4420", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.089 "hdgst": false, 00:07:51.089 "ddgst": false 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 }' 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme1", 00:07:51.089 "trtype": "tcp", 00:07:51.089 "traddr": "10.0.0.2", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "4420", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.089 "hdgst": false, 00:07:51.089 "ddgst": false 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 }' 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme1", 00:07:51.089 "trtype": "tcp", 00:07:51.089 "traddr": 
"10.0.0.2", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "4420", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.089 "hdgst": false, 00:07:51.089 "ddgst": false 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 }' 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.089 09:46:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.089 "params": { 00:07:51.089 "name": "Nvme1", 00:07:51.089 "trtype": "tcp", 00:07:51.089 "traddr": "10.0.0.2", 00:07:51.089 "adrfam": "ipv4", 00:07:51.089 "trsvcid": "4420", 00:07:51.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.089 "hdgst": false, 00:07:51.089 "ddgst": false 00:07:51.089 }, 00:07:51.089 "method": "bdev_nvme_attach_controller" 00:07:51.089 }' 00:07:51.089 [2024-11-20 09:46:24.625756] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:07:51.089 [2024-11-20 09:46:24.625801] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:51.089 [2024-11-20 09:46:24.632740] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:07:51.089 [2024-11-20 09:46:24.632787] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:51.089 [2024-11-20 09:46:24.635191] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:07:51.089 [2024-11-20 09:46:24.635235] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:51.089 [2024-11-20 09:46:24.638815] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:07:51.089 [2024-11-20 09:46:24.638858] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:51.348 [2024-11-20 09:46:24.773410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.348 [2024-11-20 09:46:24.807672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:51.348 [2024-11-20 09:46:24.873611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.348 [2024-11-20 09:46:24.915973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.605 [2024-11-20 09:46:24.971617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.605 [2024-11-20 09:46:25.024558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.605 [2024-11-20 09:46:25.027143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:51.605 [2024-11-20 09:46:25.064408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:51.605 Running I/O for 1 seconds... 00:07:51.862 Running I/O for 1 seconds... 00:07:51.862 Running I/O for 1 seconds... 00:07:51.862 Running I/O for 1 seconds... 
00:07:52.795 12894.00 IOPS, 50.37 MiB/s 00:07:52.795 Latency(us) 00:07:52.795 [2024-11-20T08:46:26.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.795 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:52.795 Nvme1n1 : 1.01 12932.94 50.52 0.00 0.00 9861.36 5867.03 15603.81 00:07:52.795 [2024-11-20T08:46:26.377Z] =================================================================================================================== 00:07:52.795 [2024-11-20T08:46:26.377Z] Total : 12932.94 50.52 0.00 0.00 9861.36 5867.03 15603.81 00:07:52.795 254696.00 IOPS, 994.91 MiB/s [2024-11-20T08:46:26.377Z] 10192.00 IOPS, 39.81 MiB/s 00:07:52.795 Latency(us) 00:07:52.795 [2024-11-20T08:46:26.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.795 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:52.795 Nvme1n1 : 1.00 254313.12 993.41 0.00 0.00 500.88 223.33 1490.16 00:07:52.795 [2024-11-20T08:46:26.377Z] =================================================================================================================== 00:07:52.795 [2024-11-20T08:46:26.377Z] Total : 254313.12 993.41 0.00 0.00 500.88 223.33 1490.16 00:07:52.795 00:07:52.795 Latency(us) 00:07:52.795 [2024-11-20T08:46:26.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.795 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:52.795 Nvme1n1 : 1.01 10262.33 40.09 0.00 0.00 12432.08 4805.97 21595.67 00:07:52.795 [2024-11-20T08:46:26.377Z] =================================================================================================================== 00:07:52.795 [2024-11-20T08:46:26.377Z] Total : 10262.33 40.09 0.00 0.00 12432.08 4805.97 21595.67 00:07:52.795 11208.00 IOPS, 43.78 MiB/s 00:07:52.795 Latency(us) 00:07:52.795 [2024-11-20T08:46:26.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:52.795 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:52.795 Nvme1n1 : 1.00 11290.81 44.10 0.00 0.00 11308.60 3229.99 21470.84 00:07:52.795 [2024-11-20T08:46:26.377Z] =================================================================================================================== 00:07:52.795 [2024-11-20T08:46:26.377Z] Total : 11290.81 44.10 0.00 0.00 11308.60 3229.99 21470.84 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2516044 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2516046 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2516049 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.053 rmmod nvme_tcp 00:07:53.053 rmmod nvme_fabrics 00:07:53.053 rmmod nvme_keyring 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2515789 ']' 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2515789 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2515789 ']' 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2515789 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515789 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515789' 00:07:53.053 killing process with pid 2515789 00:07:53.053 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2515789 00:07:53.053 09:46:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2515789 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.312 09:46:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.218 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.477 00:07:55.477 real 0m11.517s 00:07:55.477 user 0m18.952s 00:07:55.477 sys 0m6.347s 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.477 ************************************ 
00:07:55.477 END TEST nvmf_bdev_io_wait 00:07:55.477 ************************************ 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.477 ************************************ 00:07:55.477 START TEST nvmf_queue_depth 00:07:55.477 ************************************ 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.477 * Looking for test storage... 00:07:55.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.477 09:46:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.477 --rc genhtml_branch_coverage=1 00:07:55.477 --rc genhtml_function_coverage=1 00:07:55.477 --rc genhtml_legend=1 00:07:55.477 --rc geninfo_all_blocks=1 00:07:55.477 --rc 
geninfo_unexecuted_blocks=1 00:07:55.477 00:07:55.477 ' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.477 --rc genhtml_branch_coverage=1 00:07:55.477 --rc genhtml_function_coverage=1 00:07:55.477 --rc genhtml_legend=1 00:07:55.477 --rc geninfo_all_blocks=1 00:07:55.477 --rc geninfo_unexecuted_blocks=1 00:07:55.477 00:07:55.477 ' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.477 --rc genhtml_branch_coverage=1 00:07:55.477 --rc genhtml_function_coverage=1 00:07:55.477 --rc genhtml_legend=1 00:07:55.477 --rc geninfo_all_blocks=1 00:07:55.477 --rc geninfo_unexecuted_blocks=1 00:07:55.477 00:07:55.477 ' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.477 --rc genhtml_branch_coverage=1 00:07:55.477 --rc genhtml_function_coverage=1 00:07:55.477 --rc genhtml_legend=1 00:07:55.477 --rc geninfo_all_blocks=1 00:07:55.477 --rc geninfo_unexecuted_blocks=1 00:07:55.477 00:07:55.477 ' 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.477 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.739 09:46:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.739 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.740 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.741 09:46:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.741 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.742 09:46:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.742 09:46:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.336 09:46:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:02.336 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.336 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.336 
09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.336 09:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:08:02.337 00:08:02.337 --- 10.0.0.2 ping statistics --- 00:08:02.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.337 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:02.337 00:08:02.337 --- 10.0.0.1 ping statistics --- 00:08:02.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.337 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2520035 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2520035 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2520035 ']' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 [2024-11-20 09:46:35.183156] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:08:02.337 [2024-11-20 09:46:35.183222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.337 [2024-11-20 09:46:35.264579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.337 [2024-11-20 09:46:35.305230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.337 [2024-11-20 09:46:35.305264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:02.337 [2024-11-20 09:46:35.305271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.337 [2024-11-20 09:46:35.305277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.337 [2024-11-20 09:46:35.305282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.337 [2024-11-20 09:46:35.305834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 [2024-11-20 09:46:35.440079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 Malloc0 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 [2024-11-20 09:46:35.490136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.337 09:46:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2520076 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2520076 /var/tmp/bdevperf.sock 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2520076 ']' 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.337 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.337 [2024-11-20 09:46:35.541427] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:08:02.337 [2024-11-20 09:46:35.541466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520076 ] 00:08:02.337 [2024-11-20 09:46:35.615989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.337 [2024-11-20 09:46:35.656517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.338 NVMe0n1 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.338 09:46:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:02.595 Running I/O for 10 seconds... 
00:08:04.460 12091.00 IOPS, 47.23 MiB/s [2024-11-20T08:46:39.419Z] 12288.00 IOPS, 48.00 MiB/s [2024-11-20T08:46:39.994Z] 12292.33 IOPS, 48.02 MiB/s [2024-11-20T08:46:41.368Z] 12336.50 IOPS, 48.19 MiB/s [2024-11-20T08:46:42.302Z] 12287.80 IOPS, 48.00 MiB/s [2024-11-20T08:46:43.235Z] 12428.83 IOPS, 48.55 MiB/s [2024-11-20T08:46:44.168Z] 12419.43 IOPS, 48.51 MiB/s [2024-11-20T08:46:45.102Z] 12431.25 IOPS, 48.56 MiB/s [2024-11-20T08:46:46.034Z] 12475.00 IOPS, 48.73 MiB/s [2024-11-20T08:46:46.293Z] 12479.20 IOPS, 48.75 MiB/s 00:08:12.711 Latency(us) 00:08:12.711 [2024-11-20T08:46:46.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.711 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:12.711 Verification LBA range: start 0x0 length 0x4000 00:08:12.711 NVMe0n1 : 10.06 12505.75 48.85 0.00 0.00 81634.36 18849.40 52428.80 00:08:12.711 [2024-11-20T08:46:46.293Z] =================================================================================================================== 00:08:12.711 [2024-11-20T08:46:46.293Z] Total : 12505.75 48.85 0.00 0.00 81634.36 18849.40 52428.80 00:08:12.711 { 00:08:12.711 "results": [ 00:08:12.711 { 00:08:12.711 "job": "NVMe0n1", 00:08:12.711 "core_mask": "0x1", 00:08:12.711 "workload": "verify", 00:08:12.711 "status": "finished", 00:08:12.711 "verify_range": { 00:08:12.711 "start": 0, 00:08:12.711 "length": 16384 00:08:12.711 }, 00:08:12.711 "queue_depth": 1024, 00:08:12.711 "io_size": 4096, 00:08:12.711 "runtime": 10.059532, 00:08:12.711 "iops": 12505.75076454849, 00:08:12.711 "mibps": 48.850588924017536, 00:08:12.711 "io_failed": 0, 00:08:12.711 "io_timeout": 0, 00:08:12.711 "avg_latency_us": 81634.3625435889, 00:08:12.711 "min_latency_us": 18849.401904761904, 00:08:12.711 "max_latency_us": 52428.8 00:08:12.711 } 00:08:12.711 ], 00:08:12.711 "core_count": 1 00:08:12.711 } 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2520076 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2520076 ']' 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2520076 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520076 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520076' 00:08:12.711 killing process with pid 2520076 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2520076 00:08:12.711 Received shutdown signal, test time was about 10.000000 seconds 00:08:12.711 00:08:12.711 Latency(us) 00:08:12.711 [2024-11-20T08:46:46.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.711 [2024-11-20T08:46:46.293Z] =================================================================================================================== 00:08:12.711 [2024-11-20T08:46:46.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2520076 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.711 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.970 rmmod nvme_tcp 00:08:12.970 rmmod nvme_fabrics 00:08:12.970 rmmod nvme_keyring 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2520035 ']' 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2520035 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2520035 ']' 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2520035 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520035 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520035' 00:08:12.970 killing process with pid 2520035 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2520035 00:08:12.970 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2520035 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.229 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.230 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.230 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.230 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.230 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.230 09:46:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.134 09:46:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.134 00:08:15.134 real 0m19.776s 00:08:15.134 user 0m23.072s 00:08:15.134 sys 0m6.095s 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.134 ************************************ 00:08:15.134 END TEST nvmf_queue_depth 00:08:15.134 ************************************ 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.134 09:46:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.394 ************************************ 00:08:15.394 START TEST nvmf_target_multipath 00:08:15.394 ************************************ 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.394 * Looking for test storage... 
00:08:15.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.394 09:46:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.394 --rc genhtml_branch_coverage=1 00:08:15.394 --rc genhtml_function_coverage=1 00:08:15.394 --rc genhtml_legend=1 00:08:15.394 --rc geninfo_all_blocks=1 00:08:15.394 --rc geninfo_unexecuted_blocks=1 00:08:15.394 00:08:15.394 ' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.394 --rc genhtml_branch_coverage=1 00:08:15.394 --rc genhtml_function_coverage=1 00:08:15.394 --rc genhtml_legend=1 00:08:15.394 --rc geninfo_all_blocks=1 00:08:15.394 --rc geninfo_unexecuted_blocks=1 00:08:15.394 00:08:15.394 ' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.394 --rc genhtml_branch_coverage=1 00:08:15.394 --rc genhtml_function_coverage=1 00:08:15.394 --rc genhtml_legend=1 00:08:15.394 --rc geninfo_all_blocks=1 00:08:15.394 --rc geninfo_unexecuted_blocks=1 00:08:15.394 00:08:15.394 ' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.394 --rc genhtml_branch_coverage=1 00:08:15.394 --rc genhtml_function_coverage=1 00:08:15.394 --rc genhtml_legend=1 00:08:15.394 --rc geninfo_all_blocks=1 00:08:15.394 --rc geninfo_unexecuted_blocks=1 00:08:15.394 00:08:15.394 ' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.394 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.395 09:46:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.966 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:21.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:21.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:21.967 Found net devices under 0000:86:00.0: cvl_0_0 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.967 09:46:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:21.967 Found net devices under 0000:86:00.1: cvl_0_1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:08:21.967 00:08:21.967 --- 10.0.0.2 ping statistics --- 00:08:21.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.967 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:21.967 00:08:21.967 --- 10.0.0.1 ping statistics --- 00:08:21.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.967 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.967 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:21.968 only one NIC for nvmf test 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:21.968 09:46:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.968 rmmod nvme_tcp 00:08:21.968 rmmod nvme_fabrics 00:08:21.968 rmmod nvme_keyring 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.968 09:46:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.875 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.876 00:08:23.876 real 0m8.374s 00:08:23.876 user 0m1.808s 00:08:23.876 sys 0m4.579s 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:23.876 ************************************ 00:08:23.876 END TEST nvmf_target_multipath 00:08:23.876 ************************************ 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.876 ************************************ 00:08:23.876 START TEST nvmf_zcopy 00:08:23.876 ************************************ 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:23.876 * Looking for test storage... 00:08:23.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.876 09:46:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.876 --rc genhtml_branch_coverage=1 00:08:23.876 --rc genhtml_function_coverage=1 00:08:23.876 --rc genhtml_legend=1 00:08:23.876 --rc geninfo_all_blocks=1 00:08:23.876 --rc geninfo_unexecuted_blocks=1 00:08:23.876 00:08:23.876 ' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.876 --rc genhtml_branch_coverage=1 00:08:23.876 --rc genhtml_function_coverage=1 00:08:23.876 --rc genhtml_legend=1 00:08:23.876 --rc geninfo_all_blocks=1 00:08:23.876 --rc geninfo_unexecuted_blocks=1 00:08:23.876 00:08:23.876 ' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.876 --rc genhtml_branch_coverage=1 00:08:23.876 --rc genhtml_function_coverage=1 00:08:23.876 --rc genhtml_legend=1 00:08:23.876 --rc geninfo_all_blocks=1 00:08:23.876 --rc geninfo_unexecuted_blocks=1 00:08:23.876 00:08:23.876 ' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.876 --rc genhtml_branch_coverage=1 00:08:23.876 --rc 
genhtml_function_coverage=1 00:08:23.876 --rc genhtml_legend=1 00:08:23.876 --rc geninfo_all_blocks=1 00:08:23.876 --rc geninfo_unexecuted_blocks=1 00:08:23.876 00:08:23.876 ' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.876 09:46:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:23.876 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.877 09:46:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.877 09:46:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.444 09:47:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.444 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.445 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.445 09:47:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.445 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.445 09:47:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:08:30.445 00:08:30.445 --- 10.0.0.2 ping statistics --- 00:08:30.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.445 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:08:30.445
00:08:30.445 --- 10.0.0.1 ping statistics ---
00:08:30.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:30.445 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2528973
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2528973
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2528973 ']'
00:08:30.445 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:30.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 [2024-11-20 09:47:03.478327] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:08:30.446 [2024-11-20 09:47:03.478383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:30.446 [2024-11-20 09:47:03.560634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.446 [2024-11-20 09:47:03.599308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:30.446 [2024-11-20 09:47:03.599344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
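[Editor's note] The trace above shows nvmf/common.sh assembling NVMF_TRANSPORT_OPTS (start with `-t tcp`, append `-o` for TCP runs) before launching nvmf_tgt and waiting for its RPC socket. A minimal sketch of that logic, with variable names taken from the log; the polling helper is a simplified stand-in for waitforlisten, not SPDK's actual implementation:

```shell
# Sketch of the option assembly seen in the trace (names from the log).
TEST_TRANSPORT=tcp

NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ $TEST_TRANSPORT == tcp ]]; then
    # TCP runs append -o, matching the nvmf/common.sh@493 step above
    NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"
fi
echo "$NVMF_TRANSPORT_OPTS"   # -t tcp -o

# Simplified stand-in for waitforlisten: poll until the RPC socket path
# appears (the real helper targets /var/tmp/spdk.sock and also checks the PID).
wait_for_rpc() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr || -e $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The `max_retries=100` default mirrors the `local max_retries=100` line in the trace.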
00:08:30.446 [2024-11-20 09:47:03.599350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:30.446 [2024-11-20 09:47:03.599357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:30.446 [2024-11-20 09:47:03.599361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:30.446 [2024-11-20 09:47:03.599904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 [2024-11-20 09:47:03.746370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 [2024-11-20 09:47:03.766572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 malloc0
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:30.446 {
00:08:30.446 "params": {
00:08:30.446 "name": "Nvme$subsystem",
00:08:30.446 "trtype": "$TEST_TRANSPORT",
00:08:30.446 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:30.446 "adrfam": "ipv4",
00:08:30.446 "trsvcid": "$NVMF_PORT",
00:08:30.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:30.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:30.446 "hdgst": ${hdgst:-false},
00:08:30.446 "ddgst": ${ddgst:-false}
00:08:30.446 },
00:08:30.446 "method": "bdev_nvme_attach_controller"
00:08:30.446 }
00:08:30.446 EOF
00:08:30.446 )")
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:30.446 09:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:30.446 "params": {
00:08:30.446 "name": "Nvme1",
00:08:30.446 "trtype": "tcp",
00:08:30.446 "traddr": "10.0.0.2",
00:08:30.446 "adrfam": "ipv4",
00:08:30.446 "trsvcid": "4420",
00:08:30.446 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:30.446 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:30.446 "hdgst": false,
00:08:30.446 "ddgst": false
00:08:30.446 },
00:08:30.446 "method": "bdev_nvme_attach_controller"
00:08:30.446 }'
00:08:30.446 [2024-11-20 09:47:03.851947] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:08:30.446 [2024-11-20 09:47:03.851987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529005 ]
00:08:30.446 [2024-11-20 09:47:03.925831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.446 [2024-11-20 09:47:03.966124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.705 Running I/O for 10 seconds...
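[Editor's note] gen_nvmf_target_json, traced above, expands a heredoc template into one bdev_nvme_attach_controller entry per subsystem and hands the result to bdevperf over a file descriptor. A cut-down sketch of the same idea; the function name `gen_target_entry` and the hard-coded environment values are illustrative (the real helper reads them from the test environment and merges entries with jq):

```shell
# Stand-in environment values; the real test exports these (see the trace).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Build one bdev_nvme_attach_controller entry, as gen_nvmf_target_json does
# for each subsystem number it is given (default: 1).
gen_target_entry() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_target_entry 1)
echo "$config"
```

In the run above the expanded JSON reaches bdevperf through process substitution, which is what the `--json /dev/fd/62` argument corresponds to; the `false` digest values come from the `${hdgst:-false}` / `${ddgst:-false}` defaults in the template.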
00:08:33.012 8681.00 IOPS, 67.82 MiB/s
[2024-11-20T08:47:07.528Z] 8766.50 IOPS, 68.49 MiB/s
[2024-11-20T08:47:08.497Z] 8788.33 IOPS, 68.66 MiB/s
[2024-11-20T08:47:09.493Z] 8800.00 IOPS, 68.75 MiB/s
[2024-11-20T08:47:10.429Z] 8804.40 IOPS, 68.78 MiB/s
[2024-11-20T08:47:11.363Z] 8795.00 IOPS, 68.71 MiB/s
[2024-11-20T08:47:12.298Z] 8791.14 IOPS, 68.68 MiB/s
[2024-11-20T08:47:13.232Z] 8796.12 IOPS, 68.72 MiB/s
[2024-11-20T08:47:14.605Z] 8803.33 IOPS, 68.78 MiB/s
[2024-11-20T08:47:14.605Z] 8804.30 IOPS, 68.78 MiB/s
00:08:41.023 Latency(us)
00:08:41.023 [2024-11-20T08:47:14.605Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:08:41.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:41.023 Verification LBA range: start 0x0 length 0x1000
00:08:41.023 Nvme1n1 : 10.01  8807.40  68.81  0.00  0.00  14492.27  1903.66  22469.49
00:08:41.023 [2024-11-20T08:47:14.605Z] ===================================================================================================================
00:08:41.023 [2024-11-20T08:47:14.605Z] Total : 8807.40  68.81  0.00  0.00  14492.27  1903.66  22469.49
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2530815
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:41.023 {
00:08:41.023 "params": {
00:08:41.023 "name": "Nvme$subsystem",
00:08:41.023 "trtype": "$TEST_TRANSPORT",
00:08:41.023 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:41.023 "adrfam": "ipv4",
00:08:41.023 "trsvcid": "$NVMF_PORT",
00:08:41.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:41.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:41.023 "hdgst": ${hdgst:-false},
00:08:41.023 "ddgst": ${ddgst:-false}
00:08:41.023 },
00:08:41.023 "method": "bdev_nvme_attach_controller"
00:08:41.023 }
00:08:41.023 EOF
00:08:41.023 )")
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:41.023 [2024-11-20 09:47:14.364135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.023 [2024-11-20 09:47:14.364170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
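[Editor's note] The MiB/s column in the bdevperf summary is just IOPS times the IO size (the `-o 8192` bytes from the command line), converted to MiB. A quick arithmetic cross-check of the summary row:

```shell
# Cross-check the bdevperf summary row: throughput = IOPS * IO size / 1 MiB.
iops=8807.40
io_size=8192   # bytes, from -o 8192 above
mibs=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mibs MiB/s"   # 68.81 MiB/s, matching the table
```

The same relation holds for each per-second sample in the ticker (e.g. 8681.00 IOPS corresponds to 67.82 MiB/s).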
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:41.023 09:47:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:41.023 "params": {
00:08:41.023 "name": "Nvme1",
00:08:41.023 "trtype": "tcp",
00:08:41.023 "traddr": "10.0.0.2",
00:08:41.023 "adrfam": "ipv4",
00:08:41.023 "trsvcid": "4420",
00:08:41.023 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:41.023 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:41.023 "hdgst": false,
00:08:41.023 "ddgst": false
00:08:41.023 },
00:08:41.023 "method": "bdev_nvme_attach_controller"
00:08:41.023 }'
00:08:41.023 [2024-11-20 09:47:14.376133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.023 [2024-11-20 09:47:14.376146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.023 [2024-11-20 09:47:14.388159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.023 [2024-11-20 09:47:14.388171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.023 [2024-11-20 09:47:14.400192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.023 [2024-11-20 09:47:14.400210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.023 [2024-11-20 09:47:14.402701] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:08:41.023 [2024-11-20 09:47:14.402743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530815 ] 00:08:41.023 [2024-11-20 09:47:14.412241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.412253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.424270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.424282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.436285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.436296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.448316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.448326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.460348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.460358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.472379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.472389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.476800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.023 [2024-11-20 09:47:14.484410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:41.023 [2024-11-20 09:47:14.484424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.496444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.496458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.508500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.023 [2024-11-20 09:47:14.508517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.023 [2024-11-20 09:47:14.517663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.024 [2024-11-20 09:47:14.520522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.520533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.532557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.532574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.544584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.544605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.556612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.556627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.568644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.568657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.580676] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.580690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.024 [2024-11-20 09:47:14.592706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.024 [2024-11-20 09:47:14.592717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.604759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.604781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.616796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.616816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.628816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.628835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.640839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.640852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.652872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.652882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.664903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.664913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.676944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.676960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.688979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.688995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.701004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.701015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.745475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.745495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.757155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.757168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 Running I/O for 5 seconds... 
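[Editor's note] The long run of paired "Requested NSID 1 already in use" / "Unable to add namespace" messages above comes from the test re-issuing nvmf_subsystem_add_ns for NSID 1 while the namespace is still attached; each attempt fails until the namespace is detached. The shape of such a retry loop can be sketched with a stubbed rpc command; the stub, the sentinel file, and the retry count are all illustrative here, not SPDK code:

```shell
# Stub: fails while a sentinel "namespace attached" file exists, mimicking
# the NSID-in-use errors in the log above.
ns_flag=$(mktemp)
attempts=0

rpc_add_ns() {  # stand-in for: rpc.py nvmf_subsystem_add_ns <nqn> malloc0 -n 1
    [ ! -e "$ns_flag" ]
}

for i in 1 2 3 4 5; do
    attempts=$((attempts + 1))
    if rpc_add_ns; then
        break    # add succeeded once the namespace was free again
    fi
    if [ "$i" -eq 3 ]; then
        rm -f "$ns_flag"   # namespace detached after a few failed tries
    fi
done
echo "attempts=$attempts"
```

With the sentinel removed on the third iteration, the fourth attempt succeeds, which is the pattern the log shows at much higher repetition.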
00:08:41.282 [2024-11-20 09:47:14.767327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.767348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.776752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.776772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.786228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.786248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.795476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.795496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.809915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.809937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.818742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.818761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.827487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.827507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.836788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.836808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.282 [2024-11-20 09:47:14.846164] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.282 [2024-11-20 09:47:14.846184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.860632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.860652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.874392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.874411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.883314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.883335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.891835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.891855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.901078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.901097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.915174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.915194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.928989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.929009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.942719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.942738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.951582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.951601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.960838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.960857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.974984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.975003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.983760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.983781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:14.992978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:14.992998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.002704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.002723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.011891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.011910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.026236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 
[2024-11-20 09:47:15.026256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.033825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.033844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.042691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.042711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.056790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.056809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.070601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.070621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.084483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.084503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.098251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.098271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.540 [2024-11-20 09:47:15.107152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.540 [2024-11-20 09:47:15.107172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.121249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.121269] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.130024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.130048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.144224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.144260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.158240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.158261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.166973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.166992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.176350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.176369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.186183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.186208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.200499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.200519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.798 [2024-11-20 09:47:15.209447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.209467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:41.798 [2024-11-20 09:47:15.217955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.798 [2024-11-20 09:47:15.217973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.316 16824.00 IOPS, 131.44 MiB/s [2024-11-20T08:47:15.898Z] 00:08:43.351 16986.00 IOPS, 132.70 MiB/s [2024-11-20T08:47:16.933Z] 00:08:43.610 [2024-11-20 09:47:17.079533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610
[2024-11-20 09:47:17.079553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.092954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.092974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.106425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.106444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.119657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.119678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.133566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.133588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.147067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.147086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.160635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.160654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.175078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.175097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.610 [2024-11-20 09:47:17.186114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.610 [2024-11-20 09:47:17.186134] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.868 [2024-11-20 09:47:17.200821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.868 [2024-11-20 09:47:17.200841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.868 [2024-11-20 09:47:17.214241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.868 [2024-11-20 09:47:17.214260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.868 [2024-11-20 09:47:17.227291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.868 [2024-11-20 09:47:17.227311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.868 [2024-11-20 09:47:17.241212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.868 [2024-11-20 09:47:17.241230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.868 [2024-11-20 09:47:17.255108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.255127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.268710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.268728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.282417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.282437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.295890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.295910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:43.869 [2024-11-20 09:47:17.309569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.309588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.323557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.323576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.336943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.336963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.350811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.350830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.364048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.364067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.377702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.377721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.391343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.391363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.404668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.404687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.418388] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.418407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.432074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.432093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.869 [2024-11-20 09:47:17.446115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.869 [2024-11-20 09:47:17.446135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.459666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.459686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.473328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.473347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.487253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.487273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.501009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.501029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.514624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.514644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.528534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.528554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.542194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.542219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.555783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.555808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.569520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.569539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.582880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.582900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.596695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.596715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.610196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.610221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.623410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.623430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.637624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 
[2024-11-20 09:47:17.637643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.648783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.648802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.662925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.662944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.676125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.676144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.690118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.690138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.126 [2024-11-20 09:47:17.703958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.126 [2024-11-20 09:47:17.703978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.717786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.717806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.732167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.732186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.743492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.743512] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.757820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.757839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 17025.33 IOPS, 133.01 MiB/s [2024-11-20T08:47:17.966Z] [2024-11-20 09:47:17.771482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.771502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.785566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.785587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.799240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.799259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.812938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.812959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.826378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.826397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.839641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.839661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.853705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.853725] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.867502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.384 [2024-11-20 09:47:17.867521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.384 [2024-11-20 09:47:17.881435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.881454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.385 [2024-11-20 09:47:17.895394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.895414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.385 [2024-11-20 09:47:17.909215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.909235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.385 [2024-11-20 09:47:17.922683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.922703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.385 [2024-11-20 09:47:17.936429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.936448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.385 [2024-11-20 09:47:17.950249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.385 [2024-11-20 09:47:17.950269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:17.964032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:17.964052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.643 [2024-11-20 09:47:17.977726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:17.977747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:17.992028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:17.992050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.002813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.002832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.016655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.016675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.030334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.030354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.043840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.043859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.057695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.057714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.071305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.071325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.084677] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.084702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.098956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.098975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.114575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.114595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.128448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.128467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.141674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.141693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.154698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.154717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.168265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.168283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.181976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.181996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.195608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.195629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.643 [2024-11-20 09:47:18.209313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.643 [2024-11-20 09:47:18.209333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.223289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.223311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.237242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.237264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.251369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.251389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.262225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.262246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.271592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.271612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.285870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.285890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.299030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 
[2024-11-20 09:47:18.299050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.312958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.312978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.326564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.326583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.335976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.335999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.350283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.350303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.364273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.364294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.377832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.377851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.391569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.391588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.405080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.405100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.414425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.414446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.428620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.428640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.442313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.442332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.451626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.451646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.465981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.466000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.902 [2024-11-20 09:47:18.480101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.902 [2024-11-20 09:47:18.480121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.161 [2024-11-20 09:47:18.494021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.161 [2024-11-20 09:47:18.494042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.161 [2024-11-20 09:47:18.505553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.161 [2024-11-20 09:47:18.505575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:08:45.161 [2024-11-20 09:47:18.514889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:45.161 [2024-11-20 09:47:18.514911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats roughly every 13 ms, 09:47:18.529 through 09:47:18.760 ...]
00:08:45.420 17059.75 IOPS, 133.28 MiB/s [2024-11-20T08:47:19.002Z]
[... the error pair keeps repeating at the same cadence, 09:47:18.774 through 09:47:19.705 ...]
00:08:46.198 [2024-11-20 09:47:19.718392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.198 [2024-11-20 09:47:19.718412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace
00:08:46.198 [2024-11-20 09:47:19.732171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.198 [2024-11-20 09:47:19.732191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.198 [2024-11-20 09:47:19.746049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.198 [2024-11-20 09:47:19.746069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.198 [2024-11-20 09:47:19.759368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.198 [2024-11-20 09:47:19.759387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.198 17081.40 IOPS, 133.45 MiB/s [2024-11-20T08:47:19.780Z] [2024-11-20 09:47:19.773113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.198 [2024-11-20 09:47:19.773134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461
00:08:46.461 Latency(us)
00:08:46.461 [2024-11-20T08:47:20.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:46.461 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:46.461 Nvme1n1 : 5.01 17086.65 133.49 0.00 0.00 7484.43 3386.03 15978.30
00:08:46.461 [2024-11-20T08:47:20.043Z] ===================================================================================================================
00:08:46.461 [2024-11-20T08:47:20.043Z] Total : 17086.65 133.49 0.00 0.00 7484.43 3386.03 15978.30
00:08:46.461 [2024-11-20 09:47:19.782141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.782160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 [2024-11-20 09:47:19.794171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use
00:08:46.461 [2024-11-20 09:47:19.794188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at ~12 ms intervals, 09:47:19.806 through 09:47:19.878 ...]
00:08:46.461 [2024-11-20 09:47:19.890434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.890449]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 [2024-11-20 09:47:19.902470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.902485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 [2024-11-20 09:47:19.914502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.914518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 [2024-11-20 09:47:19.926527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.926539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 [2024-11-20 09:47:19.938558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:46.461 [2024-11-20 09:47:19.938570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:46.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2530815) - No such process
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2530815
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.461 delay0
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.461 09:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:46.719 [2024-11-20 09:47:20.087899] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:54.828 Initializing NVMe Controllers
00:08:54.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:54.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:54.828 Initialization complete. Launching workers.
00:08:54.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5819
00:08:54.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6092, failed to submit 47
00:08:54.828 success 5918, unsuccessful 174, failed 0
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:54.828 rmmod nvme_tcp
00:08:54.828 rmmod nvme_fabrics
00:08:54.828 rmmod nvme_keyring
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2528973 ']'
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2528973
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2528973 ']'
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2528973
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@959 -- # uname
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:54.828 09:47:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528973
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2528973'
00:08:54.828 killing process with pid 2528973
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2528973
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2528973
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:54.828 09:47:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:55.765 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:55.766
00:08:55.766 real 0m32.102s
00:08:55.766 user 0m42.756s
00:08:55.766 sys 0m11.635s
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:55.766 ************************************
00:08:55.766 END TEST nvmf_zcopy
00:08:55.766 ************************************
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.766 09:47:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:56.025 ************************************
00:08:56.025 START TEST nvmf_nmic
00:08:56.025 ************************************
00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:56.025 * Looking for test storage...
00:08:56.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.025 09:47:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.025 --rc genhtml_branch_coverage=1 00:08:56.025 --rc genhtml_function_coverage=1 00:08:56.025 --rc genhtml_legend=1 00:08:56.025 --rc geninfo_all_blocks=1 00:08:56.025 --rc geninfo_unexecuted_blocks=1 
00:08:56.025 00:08:56.025 ' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.025 --rc genhtml_branch_coverage=1 00:08:56.025 --rc genhtml_function_coverage=1 00:08:56.025 --rc genhtml_legend=1 00:08:56.025 --rc geninfo_all_blocks=1 00:08:56.025 --rc geninfo_unexecuted_blocks=1 00:08:56.025 00:08:56.025 ' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.025 --rc genhtml_branch_coverage=1 00:08:56.025 --rc genhtml_function_coverage=1 00:08:56.025 --rc genhtml_legend=1 00:08:56.025 --rc geninfo_all_blocks=1 00:08:56.025 --rc geninfo_unexecuted_blocks=1 00:08:56.025 00:08:56.025 ' 00:08:56.025 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.025 --rc genhtml_branch_coverage=1 00:08:56.025 --rc genhtml_function_coverage=1 00:08:56.025 --rc genhtml_legend=1 00:08:56.025 --rc geninfo_all_blocks=1 00:08:56.025 --rc geninfo_unexecuted_blocks=1 00:08:56.025 00:08:56.025 ' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.026 09:47:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.026 
09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.026 09:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.598 09:47:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:02.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:02.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:02.598 Found net devices under 0000:86:00.0: cvl_0_0 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:02.598 Found net devices under 0000:86:00.1: cvl_0_1 00:09:02.598 
09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.598 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:09:02.599 00:09:02.599 --- 10.0.0.2 ping statistics --- 00:09:02.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.599 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:02.599 00:09:02.599 --- 10.0.0.1 ping statistics --- 00:09:02.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.599 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2536446 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2536446 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2536446 ']' 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 [2024-11-20 09:47:35.588430] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:09:02.599 [2024-11-20 09:47:35.588472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.599 [2024-11-20 09:47:35.665487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.599 [2024-11-20 09:47:35.709196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.599 [2024-11-20 09:47:35.709256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.599 [2024-11-20 09:47:35.709264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.599 [2024-11-20 09:47:35.709270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.599 [2024-11-20 09:47:35.709275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.599 [2024-11-20 09:47:35.710795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.599 [2024-11-20 09:47:35.710887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.599 [2024-11-20 09:47:35.710993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.599 [2024-11-20 09:47:35.710994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 [2024-11-20 09:47:35.851544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.599 
09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 Malloc0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.599 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.600 [2024-11-20 09:47:35.925473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:02.600 test case1: single bdev can't be used in multiple subsystems 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.600 [2024-11-20 09:47:35.957382] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:02.600 [2024-11-20 
09:47:35.957407] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:02.600 [2024-11-20 09:47:35.957414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.600 request: 00:09:02.600 { 00:09:02.600 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:02.600 "namespace": { 00:09:02.600 "bdev_name": "Malloc0", 00:09:02.600 "no_auto_visible": false 00:09:02.600 }, 00:09:02.600 "method": "nvmf_subsystem_add_ns", 00:09:02.600 "req_id": 1 00:09:02.600 } 00:09:02.600 Got JSON-RPC error response 00:09:02.600 response: 00:09:02.600 { 00:09:02.600 "code": -32602, 00:09:02.600 "message": "Invalid parameters" 00:09:02.600 } 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:02.600 Adding namespace failed - expected result. 
00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:02.600 test case2: host connect to nvmf target in multiple paths 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.600 [2024-11-20 09:47:35.969542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.600 09:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.970 09:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:04.902 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.902 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:04.902 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.902 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:04.902 09:47:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:06.794 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.794 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.794 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.070 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:07.070 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.070 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:07.070 09:47:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:07.070 [global] 00:09:07.070 thread=1 00:09:07.070 invalidate=1 00:09:07.070 rw=write 00:09:07.070 time_based=1 00:09:07.070 runtime=1 00:09:07.070 ioengine=libaio 00:09:07.070 direct=1 00:09:07.070 bs=4096 00:09:07.070 iodepth=1 00:09:07.070 norandommap=0 00:09:07.070 numjobs=1 00:09:07.070 00:09:07.070 verify_dump=1 00:09:07.070 verify_backlog=512 00:09:07.070 verify_state_save=0 00:09:07.070 do_verify=1 00:09:07.070 verify=crc32c-intel 00:09:07.070 [job0] 00:09:07.070 filename=/dev/nvme0n1 00:09:07.070 Could not set queue depth (nvme0n1) 00:09:07.331 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.331 fio-3.35 00:09:07.331 Starting 1 thread 00:09:08.701 00:09:08.701 job0: (groupid=0, jobs=1): err= 0: pid=2537330: Wed Nov 20 09:47:41 2024 00:09:08.701 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:09:08.701 slat (nsec): min=8599, max=22708, avg=12238.75, stdev=4064.54 00:09:08.701 clat (usec): min=256, max=41059, avg=39279.35, stdev=8312.02 00:09:08.701 lat (usec): min=267, max=41068, 
avg=39291.59, stdev=8312.36 00:09:08.701 clat percentiles (usec): 00:09:08.701 | 1.00th=[ 258], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:08.701 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:08.701 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:08.701 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:08.701 | 99.99th=[41157] 00:09:08.701 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:08.701 slat (nsec): min=10004, max=42119, avg=11646.27, stdev=2045.79 00:09:08.701 clat (usec): min=128, max=342, avg=160.08, stdev=13.27 00:09:08.701 lat (usec): min=139, max=384, avg=171.73, stdev=14.20 00:09:08.701 clat percentiles (usec): 00:09:08.701 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:08.701 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:09:08.701 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 176], 00:09:08.701 | 99.00th=[ 196], 99.50th=[ 237], 99.90th=[ 343], 99.95th=[ 343], 00:09:08.701 | 99.99th=[ 343] 00:09:08.701 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:08.701 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:08.701 lat (usec) : 250=95.34%, 500=0.37% 00:09:08.701 lat (msec) : 50=4.29% 00:09:08.701 cpu : usr=0.19%, sys=0.97%, ctx=536, majf=0, minf=1 00:09:08.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.701 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:08.701 00:09:08.701 Run status group 0 (all jobs): 00:09:08.701 READ: bw=92.9KiB/s (95.2kB/s), 92.9KiB/s-92.9KiB/s (95.2kB/s-95.2kB/s), io=96.0KiB (98.3kB), 
run=1033-1033msec 00:09:08.701 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:09:08.701 00:09:08.701 Disk stats (read/write): 00:09:08.701 nvme0n1: ios=69/512, merge=0/0, ticks=791/79, in_queue=870, util=90.98% 00:09:08.701 09:47:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:08.701 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.701 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:08.701 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:08.701 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.701 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.702 rmmod nvme_tcp 00:09:08.702 rmmod nvme_fabrics 00:09:08.702 rmmod nvme_keyring 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2536446 ']' 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2536446 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2536446 ']' 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2536446 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2536446 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2536446' 00:09:08.702 killing process with pid 2536446 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2536446 00:09:08.702 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2536446 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.961 09:47:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.936 00:09:10.936 real 0m15.103s 00:09:10.936 user 0m33.867s 00:09:10.936 sys 0m5.210s 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.936 ************************************ 00:09:10.936 END TEST nvmf_nmic 00:09:10.936 ************************************ 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.936 09:47:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.196 ************************************ 00:09:11.196 START TEST nvmf_fio_target 00:09:11.196 ************************************ 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:11.196 * Looking for test storage... 00:09:11.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.196 09:47:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.196 --rc genhtml_branch_coverage=1 00:09:11.196 --rc genhtml_function_coverage=1 00:09:11.196 --rc genhtml_legend=1 00:09:11.196 --rc geninfo_all_blocks=1 00:09:11.196 --rc geninfo_unexecuted_blocks=1 00:09:11.196 00:09:11.196 ' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.196 --rc genhtml_branch_coverage=1 00:09:11.196 --rc genhtml_function_coverage=1 00:09:11.196 --rc genhtml_legend=1 00:09:11.196 --rc geninfo_all_blocks=1 00:09:11.196 --rc geninfo_unexecuted_blocks=1 00:09:11.196 00:09:11.196 ' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.196 --rc genhtml_branch_coverage=1 00:09:11.196 --rc genhtml_function_coverage=1 00:09:11.196 --rc genhtml_legend=1 00:09:11.196 --rc geninfo_all_blocks=1 00:09:11.196 --rc geninfo_unexecuted_blocks=1 00:09:11.196 00:09:11.196 ' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.196 --rc 
genhtml_branch_coverage=1 00:09:11.196 --rc genhtml_function_coverage=1 00:09:11.196 --rc genhtml_legend=1 00:09:11.196 --rc geninfo_all_blocks=1 00:09:11.196 --rc geninfo_unexecuted_blocks=1 00:09:11.196 00:09:11.196 ' 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.196 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.197 09:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.760 09:47:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.760 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:17.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:17.761 09:47:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:17.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:17.761 Found net devices under 0000:86:00.0: cvl_0_0 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:17.761 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:09:17.761 00:09:17.761 --- 10.0.0.2 ping statistics --- 00:09:17.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.761 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:09:17.761 00:09:17.761 --- 10.0.0.1 ping statistics --- 00:09:17.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.761 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.761 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2541219 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2541219 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2541219 ']' 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 [2024-11-20 09:47:50.780787] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:09:17.762 [2024-11-20 09:47:50.780830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.762 [2024-11-20 09:47:50.860212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.762 [2024-11-20 09:47:50.904652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.762 [2024-11-20 09:47:50.904684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.762 [2024-11-20 09:47:50.904691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.762 [2024-11-20 09:47:50.904697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.762 [2024-11-20 09:47:50.904702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:17.762 [2024-11-20 09:47:50.906148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.762 [2024-11-20 09:47:50.906258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.762 [2024-11-20 09:47:50.906309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.762 [2024-11-20 09:47:50.906310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.762 09:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:17.762 [2024-11-20 09:47:51.203753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.762 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.019 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:18.019 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.276 09:47:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:18.276 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.533 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:18.533 09:47:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.533 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:18.533 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:18.790 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.048 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:19.048 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.305 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:19.305 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.563 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:19.563 09:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:19.563 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.821 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:19.821 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.078 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:20.078 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.335 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.335 [2024-11-20 09:47:53.877229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.335 09:47:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:20.592 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:20.849 09:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:22.219 09:47:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:24.114 09:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:24.114 [global] 00:09:24.114 thread=1 00:09:24.114 invalidate=1 00:09:24.114 rw=write 00:09:24.114 time_based=1 00:09:24.114 runtime=1 00:09:24.114 ioengine=libaio 00:09:24.114 direct=1 00:09:24.114 bs=4096 00:09:24.114 iodepth=1 00:09:24.114 norandommap=0 00:09:24.114 numjobs=1 00:09:24.114 00:09:24.114 
verify_dump=1 00:09:24.114 verify_backlog=512 00:09:24.114 verify_state_save=0 00:09:24.114 do_verify=1 00:09:24.114 verify=crc32c-intel 00:09:24.114 [job0] 00:09:24.114 filename=/dev/nvme0n1 00:09:24.114 [job1] 00:09:24.114 filename=/dev/nvme0n2 00:09:24.114 [job2] 00:09:24.114 filename=/dev/nvme0n3 00:09:24.114 [job3] 00:09:24.114 filename=/dev/nvme0n4 00:09:24.114 Could not set queue depth (nvme0n1) 00:09:24.114 Could not set queue depth (nvme0n2) 00:09:24.114 Could not set queue depth (nvme0n3) 00:09:24.114 Could not set queue depth (nvme0n4) 00:09:24.371 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.371 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.371 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.371 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.371 fio-3.35 00:09:24.371 Starting 4 threads 00:09:25.769 00:09:25.769 job0: (groupid=0, jobs=1): err= 0: pid=2542639: Wed Nov 20 09:47:59 2024 00:09:25.769 read: IOPS=2165, BW=8662KiB/s (8869kB/s)(8956KiB/1034msec) 00:09:25.769 slat (nsec): min=6439, max=21336, avg=7436.17, stdev=1035.40 00:09:25.769 clat (usec): min=167, max=41056, avg=261.91, stdev=1491.20 00:09:25.769 lat (usec): min=174, max=41068, avg=269.35, stdev=1491.44 00:09:25.769 clat percentiles (usec): 00:09:25.769 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:09:25.769 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:09:25.769 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 262], 95.00th=[ 269], 00:09:25.769 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[41157], 99.95th=[41157], 00:09:25.769 | 99.99th=[41157] 00:09:25.769 write: IOPS=2475, BW=9903KiB/s (10.1MB/s)(10.0MiB/1034msec); 0 zone resets 00:09:25.769 slat (nsec): min=9211, max=43406, avg=10520.64, 
stdev=1865.58 00:09:25.769 clat (usec): min=117, max=389, avg=153.47, stdev=20.63 00:09:25.769 lat (usec): min=127, max=431, avg=163.99, stdev=21.07 00:09:25.769 clat percentiles (usec): 00:09:25.770 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:09:25.770 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:09:25.770 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 192], 00:09:25.770 | 99.00th=[ 206], 99.50th=[ 249], 99.90th=[ 306], 99.95th=[ 371], 00:09:25.770 | 99.99th=[ 392] 00:09:25.770 bw ( KiB/s): min= 8192, max=12288, per=43.08%, avg=10240.00, stdev=2896.31, samples=2 00:09:25.770 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:09:25.770 lat (usec) : 250=92.89%, 500=7.04% 00:09:25.770 lat (msec) : 50=0.06% 00:09:25.770 cpu : usr=2.42%, sys=4.26%, ctx=4799, majf=0, minf=1 00:09:25.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.770 issued rwts: total=2239,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.770 job1: (groupid=0, jobs=1): err= 0: pid=2542647: Wed Nov 20 09:47:59 2024 00:09:25.770 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:09:25.770 slat (nsec): min=8333, max=25678, avg=17996.96, stdev=6434.66 00:09:25.770 clat (usec): min=211, max=42297, avg=39732.22, stdev=8629.30 00:09:25.770 lat (usec): min=234, max=42307, avg=39750.22, stdev=8628.16 00:09:25.770 clat percentiles (usec): 00:09:25.770 | 1.00th=[ 212], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:25.770 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:25.770 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:25.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:09:25.770 | 99.99th=[42206] 00:09:25.770 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:25.770 slat (nsec): min=9763, max=42659, avg=11206.78, stdev=2428.42 00:09:25.770 clat (usec): min=141, max=259, avg=188.44, stdev=15.66 00:09:25.770 lat (usec): min=153, max=293, avg=199.65, stdev=15.90 00:09:25.770 clat percentiles (usec): 00:09:25.770 | 1.00th=[ 147], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:09:25.770 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:09:25.770 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:09:25.770 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 260], 99.95th=[ 260], 00:09:25.770 | 99.99th=[ 260] 00:09:25.770 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.770 lat (usec) : 250=95.51%, 500=0.37% 00:09:25.770 lat (msec) : 50=4.11% 00:09:25.770 cpu : usr=0.29%, sys=0.49%, ctx=537, majf=0, minf=1 00:09:25.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.770 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.770 job2: (groupid=0, jobs=1): err= 0: pid=2542648: Wed Nov 20 09:47:59 2024 00:09:25.770 read: IOPS=315, BW=1263KiB/s (1293kB/s)(1264KiB/1001msec) 00:09:25.770 slat (nsec): min=8460, max=26224, avg=10175.67, stdev=3534.92 00:09:25.771 clat (usec): min=197, max=41058, avg=2818.88, stdev=9931.96 00:09:25.771 lat (usec): min=206, max=41080, avg=2829.05, stdev=9934.89 00:09:25.771 clat percentiles (usec): 00:09:25.771 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:09:25.771 | 30.00th=[ 237], 40.00th=[ 241], 
50.00th=[ 243], 60.00th=[ 247], 00:09:25.771 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[41157], 00:09:25.771 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:25.771 | 99.99th=[41157] 00:09:25.771 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:25.771 slat (nsec): min=12146, max=38487, avg=13367.87, stdev=2125.18 00:09:25.771 clat (usec): min=147, max=356, avg=188.41, stdev=19.82 00:09:25.771 lat (usec): min=161, max=369, avg=201.78, stdev=20.28 00:09:25.771 clat percentiles (usec): 00:09:25.771 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:09:25.771 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:25.771 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 219], 00:09:25.771 | 99.00th=[ 262], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 359], 00:09:25.771 | 99.99th=[ 359] 00:09:25.771 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.771 lat (usec) : 250=87.20%, 500=10.39% 00:09:25.771 lat (msec) : 50=2.42% 00:09:25.771 cpu : usr=0.50%, sys=1.80%, ctx=828, majf=0, minf=1 00:09:25.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.771 issued rwts: total=316,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.771 job3: (groupid=0, jobs=1): err= 0: pid=2542649: Wed Nov 20 09:47:59 2024 00:09:25.771 read: IOPS=2528, BW=9.88MiB/s (10.4MB/s)(9.89MiB/1001msec) 00:09:25.771 slat (nsec): min=7197, max=43171, avg=8208.56, stdev=1459.86 00:09:25.771 clat (usec): min=162, max=1303, avg=209.25, stdev=29.15 00:09:25.771 lat (usec): min=170, max=1311, avg=217.46, 
stdev=29.21 00:09:25.771 clat percentiles (usec): 00:09:25.771 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:25.771 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:25.771 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 249], 00:09:25.771 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 388], 00:09:25.771 | 99.99th=[ 1303] 00:09:25.771 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:25.771 slat (nsec): min=10238, max=43378, avg=11496.94, stdev=1684.24 00:09:25.771 clat (usec): min=119, max=457, avg=158.21, stdev=14.01 00:09:25.771 lat (usec): min=136, max=469, avg=169.71, stdev=14.24 00:09:25.771 clat percentiles (usec): 00:09:25.771 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:25.771 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:09:25.771 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:09:25.771 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 251], 99.95th=[ 255], 00:09:25.772 | 99.99th=[ 457] 00:09:25.772 bw ( KiB/s): min=12288, max=12288, per=51.70%, avg=12288.00, stdev= 0.00, samples=1 00:09:25.772 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:25.772 lat (usec) : 250=97.68%, 500=2.30% 00:09:25.772 lat (msec) : 2=0.02% 00:09:25.772 cpu : usr=4.90%, sys=7.30%, ctx=5091, majf=0, minf=1 00:09:25.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.772 issued rwts: total=2531,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.772 00:09:25.772 Run status group 0 (all jobs): 00:09:25.772 READ: bw=19.3MiB/s (20.2MB/s), 90.1KiB/s-9.88MiB/s (92.3kB/s-10.4MB/s), io=20.0MiB (20.9MB), run=1001-1034msec 00:09:25.772 
WRITE: bw=23.2MiB/s (24.3MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1034msec 00:09:25.772 00:09:25.772 Disk stats (read/write): 00:09:25.772 nvme0n1: ios=2098/2417, merge=0/0, ticks=430/360, in_queue=790, util=86.37% 00:09:25.772 nvme0n2: ios=42/512, merge=0/0, ticks=1697/87, in_queue=1784, util=97.66% 00:09:25.772 nvme0n3: ios=18/512, merge=0/0, ticks=738/94, in_queue=832, util=89.02% 00:09:25.772 nvme0n4: ios=2048/2270, merge=0/0, ticks=408/330, in_queue=738, util=89.67% 00:09:25.772 09:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:25.772 [global] 00:09:25.772 thread=1 00:09:25.772 invalidate=1 00:09:25.772 rw=randwrite 00:09:25.772 time_based=1 00:09:25.772 runtime=1 00:09:25.772 ioengine=libaio 00:09:25.772 direct=1 00:09:25.772 bs=4096 00:09:25.772 iodepth=1 00:09:25.772 norandommap=0 00:09:25.772 numjobs=1 00:09:25.772 00:09:25.772 verify_dump=1 00:09:25.772 verify_backlog=512 00:09:25.772 verify_state_save=0 00:09:25.772 do_verify=1 00:09:25.772 verify=crc32c-intel 00:09:25.772 [job0] 00:09:25.772 filename=/dev/nvme0n1 00:09:25.772 [job1] 00:09:25.772 filename=/dev/nvme0n2 00:09:25.772 [job2] 00:09:25.772 filename=/dev/nvme0n3 00:09:25.772 [job3] 00:09:25.772 filename=/dev/nvme0n4 00:09:25.772 Could not set queue depth (nvme0n1) 00:09:25.772 Could not set queue depth (nvme0n2) 00:09:25.772 Could not set queue depth (nvme0n3) 00:09:25.772 Could not set queue depth (nvme0n4) 00:09:26.035 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.035 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.035 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.035 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.035 fio-3.35 00:09:26.035 Starting 4 threads 00:09:27.406 00:09:27.406 job0: (groupid=0, jobs=1): err= 0: pid=2543023: Wed Nov 20 09:48:00 2024 00:09:27.406 read: IOPS=1488, BW=5954KiB/s (6097kB/s)(6180KiB/1038msec) 00:09:27.406 slat (nsec): min=6600, max=36600, avg=8584.02, stdev=2060.31 00:09:27.406 clat (usec): min=168, max=41291, avg=424.58, stdev=2732.31 00:09:27.406 lat (usec): min=175, max=41300, avg=433.16, stdev=2732.92 00:09:27.406 clat percentiles (usec): 00:09:27.406 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:09:27.406 | 30.00th=[ 212], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 255], 00:09:27.406 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:09:27.406 | 99.00th=[ 338], 99.50th=[ 457], 99.90th=[41157], 99.95th=[41157], 00:09:27.406 | 99.99th=[41157] 00:09:27.406 write: IOPS=1973, BW=7892KiB/s (8082kB/s)(8192KiB/1038msec); 0 zone resets 00:09:27.406 slat (nsec): min=9329, max=38638, avg=11870.39, stdev=2348.86 00:09:27.406 clat (usec): min=110, max=3409, avg=161.28, stdev=77.37 00:09:27.406 lat (usec): min=120, max=3430, avg=173.15, stdev=77.91 00:09:27.406 clat percentiles (usec): 00:09:27.406 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 137], 00:09:27.406 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:09:27.406 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200], 00:09:27.406 | 99.00th=[ 247], 99.50th=[ 277], 99.90th=[ 416], 99.95th=[ 441], 00:09:27.406 | 99.99th=[ 3425] 00:09:27.406 bw ( KiB/s): min= 7272, max= 9112, per=34.60%, avg=8192.00, stdev=1301.08, samples=2 00:09:27.406 iops : min= 1818, max= 2278, avg=2048.00, stdev=325.27, samples=2 00:09:27.406 lat (usec) : 250=79.52%, 500=20.26% 00:09:27.406 lat (msec) : 4=0.03%, 50=0.19% 00:09:27.406 cpu : usr=2.31%, sys=4.82%, ctx=3595, majf=0, minf=1 00:09:27.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:09:27.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.406 issued rwts: total=1545,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.406 job1: (groupid=0, jobs=1): err= 0: pid=2543024: Wed Nov 20 09:48:00 2024 00:09:27.406 read: IOPS=1012, BW=4051KiB/s (4148kB/s)(4148KiB/1024msec) 00:09:27.406 slat (nsec): min=6686, max=29467, avg=8095.44, stdev=2045.76 00:09:27.406 clat (usec): min=163, max=42060, avg=707.75, stdev=4545.64 00:09:27.406 lat (usec): min=171, max=42076, avg=715.85, stdev=4547.08 00:09:27.406 clat percentiles (usec): 00:09:27.406 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:27.406 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:09:27.406 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219], 00:09:27.406 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:27.406 | 99.99th=[42206] 00:09:27.406 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:09:27.406 slat (nsec): min=8991, max=40978, avg=11423.64, stdev=3548.05 00:09:27.406 clat (usec): min=119, max=449, avg=167.73, stdev=36.15 00:09:27.406 lat (usec): min=129, max=465, avg=179.16, stdev=36.58 00:09:27.406 clat percentiles (usec): 00:09:27.406 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:09:27.406 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 165], 00:09:27.406 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 245], 00:09:27.406 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 347], 99.95th=[ 449], 00:09:27.406 | 99.99th=[ 449] 00:09:27.406 bw ( KiB/s): min=12288, max=12288, per=51.90%, avg=12288.00, stdev= 0.00, samples=1 00:09:27.406 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:27.406 lat (usec) : 250=97.20%, 500=2.25% 
00:09:27.406 lat (msec) : 4=0.04%, 50=0.51% 00:09:27.406 cpu : usr=1.17%, sys=2.64%, ctx=2573, majf=0, minf=2 00:09:27.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.406 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.406 job2: (groupid=0, jobs=1): err= 0: pid=2543025: Wed Nov 20 09:48:00 2024 00:09:27.406 read: IOPS=1025, BW=4103KiB/s (4202kB/s)(4140KiB/1009msec) 00:09:27.407 slat (nsec): min=6482, max=23312, avg=7666.28, stdev=1788.17 00:09:27.407 clat (usec): min=196, max=41319, avg=699.15, stdev=4178.56 00:09:27.407 lat (usec): min=203, max=41336, avg=706.82, stdev=4179.92 00:09:27.407 clat percentiles (usec): 00:09:27.407 | 1.00th=[ 217], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:09:27.407 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:09:27.407 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 310], 00:09:27.407 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:27.407 | 99.99th=[41157] 00:09:27.407 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:09:27.407 slat (nsec): min=8978, max=39256, avg=10350.00, stdev=1509.56 00:09:27.407 clat (usec): min=120, max=321, avg=166.67, stdev=23.91 00:09:27.407 lat (usec): min=129, max=361, avg=177.02, stdev=24.22 00:09:27.407 clat percentiles (usec): 00:09:27.407 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:27.407 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:09:27.407 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 206], 00:09:27.407 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 318], 99.95th=[ 322], 00:09:27.407 | 99.99th=[ 322] 00:09:27.407 bw ( KiB/s): min= 2848, max= 
9440, per=25.95%, avg=6144.00, stdev=4661.25, samples=2 00:09:27.407 iops : min= 712, max= 2360, avg=1536.00, stdev=1165.31, samples=2 00:09:27.407 lat (usec) : 250=68.57%, 500=31.00% 00:09:27.407 lat (msec) : 50=0.43% 00:09:27.407 cpu : usr=1.59%, sys=2.08%, ctx=2571, majf=0, minf=1 00:09:27.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.407 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.407 job3: (groupid=0, jobs=1): err= 0: pid=2543027: Wed Nov 20 09:48:00 2024 00:09:27.407 read: IOPS=572, BW=2290KiB/s (2345kB/s)(2292KiB/1001msec) 00:09:27.407 slat (nsec): min=7332, max=35346, avg=8688.03, stdev=3030.62 00:09:27.407 clat (usec): min=164, max=41960, avg=1422.86, stdev=6875.22 00:09:27.407 lat (usec): min=172, max=41983, avg=1431.55, stdev=6877.33 00:09:27.407 clat percentiles (usec): 00:09:27.407 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:09:27.407 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:09:27.407 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 235], 00:09:27.407 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:27.407 | 99.99th=[42206] 00:09:27.407 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:27.407 slat (nsec): min=9662, max=50682, avg=10951.18, stdev=1962.38 00:09:27.407 clat (usec): min=116, max=340, avg=157.75, stdev=26.47 00:09:27.407 lat (usec): min=128, max=374, avg=168.70, stdev=26.87 00:09:27.407 clat percentiles (usec): 00:09:27.407 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 133], 00:09:27.407 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 157], 60.00th=[ 163], 00:09:27.407 | 70.00th=[ 172], 80.00th=[ 182], 
90.00th=[ 190], 95.00th=[ 198], 00:09:27.407 | 99.00th=[ 237], 99.50th=[ 265], 99.90th=[ 338], 99.95th=[ 343], 00:09:27.407 | 99.99th=[ 343] 00:09:27.407 bw ( KiB/s): min= 4096, max= 4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.407 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.407 lat (usec) : 250=98.06%, 500=0.75% 00:09:27.407 lat (msec) : 10=0.06%, 20=0.06%, 50=1.06% 00:09:27.407 cpu : usr=0.90%, sys=1.50%, ctx=1599, majf=0, minf=1 00:09:27.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.407 issued rwts: total=573,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.407 00:09:27.407 Run status group 0 (all jobs): 00:09:27.407 READ: bw=15.8MiB/s (16.5MB/s), 2290KiB/s-5954KiB/s (2345kB/s-6097kB/s), io=16.4MiB (17.2MB), run=1001-1038msec 00:09:27.407 WRITE: bw=23.1MiB/s (24.2MB/s), 4092KiB/s-7892KiB/s (4190kB/s-8082kB/s), io=24.0MiB (25.2MB), run=1001-1038msec 00:09:27.407 00:09:27.407 Disk stats (read/write): 00:09:27.407 nvme0n1: ios=1573/2048, merge=0/0, ticks=1325/314, in_queue=1639, util=99.20% 00:09:27.407 nvme0n2: ios=1032/1536, merge=0/0, ticks=520/240, in_queue=760, util=86.90% 00:09:27.407 nvme0n3: ios=1068/1536, merge=0/0, ticks=616/245, in_queue=861, util=90.63% 00:09:27.407 nvme0n4: ios=282/512, merge=0/0, ticks=1600/86, in_queue=1686, util=98.11% 00:09:27.407 09:48:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:27.407 [global] 00:09:27.407 thread=1 00:09:27.407 invalidate=1 00:09:27.407 rw=write 00:09:27.407 time_based=1 00:09:27.407 runtime=1 00:09:27.407 ioengine=libaio 00:09:27.407 direct=1 
00:09:27.407 bs=4096 00:09:27.407 iodepth=128 00:09:27.407 norandommap=0 00:09:27.407 numjobs=1 00:09:27.407 00:09:27.407 verify_dump=1 00:09:27.407 verify_backlog=512 00:09:27.407 verify_state_save=0 00:09:27.407 do_verify=1 00:09:27.407 verify=crc32c-intel 00:09:27.407 [job0] 00:09:27.407 filename=/dev/nvme0n1 00:09:27.407 [job1] 00:09:27.407 filename=/dev/nvme0n2 00:09:27.407 [job2] 00:09:27.407 filename=/dev/nvme0n3 00:09:27.407 [job3] 00:09:27.407 filename=/dev/nvme0n4 00:09:27.407 Could not set queue depth (nvme0n1) 00:09:27.407 Could not set queue depth (nvme0n2) 00:09:27.407 Could not set queue depth (nvme0n3) 00:09:27.407 Could not set queue depth (nvme0n4) 00:09:27.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.664 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.664 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.664 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.664 fio-3.35 00:09:27.664 Starting 4 threads 00:09:29.037 00:09:29.037 job0: (groupid=0, jobs=1): err= 0: pid=2543457: Wed Nov 20 09:48:02 2024 00:09:29.037 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:29.037 slat (nsec): min=1360, max=21501k, avg=93124.84, stdev=699917.16 00:09:29.037 clat (usec): min=2305, max=49182, avg=12706.45, stdev=6046.88 00:09:29.037 lat (usec): min=2314, max=49237, avg=12799.58, stdev=6104.95 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 3195], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9503], 00:09:29.037 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:09:29.037 | 70.00th=[13435], 80.00th=[15139], 90.00th=[20579], 95.00th=[24511], 00:09:29.037 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[42730], 00:09:29.037 | 
99.99th=[49021] 00:09:29.037 write: IOPS=5035, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1007msec); 0 zone resets 00:09:29.037 slat (usec): min=2, max=21065, avg=102.39, stdev=757.58 00:09:29.037 clat (usec): min=298, max=54940, avg=13651.12, stdev=7218.58 00:09:29.037 lat (usec): min=479, max=54991, avg=13753.51, stdev=7288.02 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 889], 5.00th=[ 6652], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:29.037 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10945], 60.00th=[12125], 00:09:29.037 | 70.00th=[14484], 80.00th=[16450], 90.00th=[21103], 95.00th=[32113], 00:09:29.037 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[49546], 00:09:29.037 | 99.99th=[54789] 00:09:29.037 bw ( KiB/s): min=15096, max=24456, per=26.83%, avg=19776.00, stdev=6618.52, samples=2 00:09:29.037 iops : min= 3774, max= 6114, avg=4944.00, stdev=1654.63, samples=2 00:09:29.037 lat (usec) : 500=0.04%, 1000=0.49% 00:09:29.037 lat (msec) : 2=0.60%, 4=1.12%, 10=28.30%, 20=57.72%, 50=11.73% 00:09:29.037 lat (msec) : 100=0.01% 00:09:29.037 cpu : usr=4.27%, sys=5.27%, ctx=424, majf=0, minf=1 00:09:29.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:29.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.037 issued rwts: total=4608,5071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.037 job1: (groupid=0, jobs=1): err= 0: pid=2543458: Wed Nov 20 09:48:02 2024 00:09:29.037 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:09:29.037 slat (nsec): min=1282, max=32813k, avg=93287.56, stdev=773524.14 00:09:29.037 clat (usec): min=3522, max=53442, avg=11873.24, stdev=6245.75 00:09:29.037 lat (usec): min=3533, max=53450, avg=11966.53, stdev=6293.58 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 5276], 5.00th=[ 6849], 
10.00th=[ 7242], 20.00th=[ 8356], 00:09:29.037 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11863], 00:09:29.037 | 70.00th=[12911], 80.00th=[13304], 90.00th=[15401], 95.00th=[18220], 00:09:29.037 | 99.00th=[46400], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:09:29.037 | 99.99th=[53216] 00:09:29.037 write: IOPS=5815, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1008msec); 0 zone resets 00:09:29.037 slat (usec): min=2, max=10079, avg=74.03, stdev=486.50 00:09:29.037 clat (usec): min=2641, max=26283, avg=10369.67, stdev=3629.74 00:09:29.037 lat (usec): min=2652, max=26295, avg=10443.71, stdev=3667.16 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 3359], 5.00th=[ 5407], 10.00th=[ 6783], 20.00th=[ 8029], 00:09:29.037 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10028], 00:09:29.037 | 70.00th=[10945], 80.00th=[11994], 90.00th=[15008], 95.00th=[17957], 00:09:29.037 | 99.00th=[22938], 99.50th=[23725], 99.90th=[25560], 99.95th=[26346], 00:09:29.037 | 99.99th=[26346] 00:09:29.037 bw ( KiB/s): min=20480, max=25400, per=31.12%, avg=22940.00, stdev=3478.97, samples=2 00:09:29.037 iops : min= 5120, max= 6350, avg=5735.00, stdev=869.74, samples=2 00:09:29.037 lat (msec) : 4=1.39%, 10=50.08%, 20=44.46%, 50=3.66%, 100=0.41% 00:09:29.037 cpu : usr=5.36%, sys=5.96%, ctx=549, majf=0, minf=1 00:09:29.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:29.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.037 issued rwts: total=5632,5862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.037 job2: (groupid=0, jobs=1): err= 0: pid=2543462: Wed Nov 20 09:48:02 2024 00:09:29.037 read: IOPS=3473, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1010msec) 00:09:29.037 slat (nsec): min=1126, max=19367k, avg=134937.55, stdev=975181.07 00:09:29.037 clat 
(usec): min=6251, max=85212, avg=17039.19, stdev=10203.37 00:09:29.037 lat (usec): min=8136, max=85247, avg=17174.13, stdev=10306.76 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10945], 20.00th=[11469], 00:09:29.037 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13304], 60.00th=[14484], 00:09:29.037 | 70.00th=[16319], 80.00th=[19530], 90.00th=[29492], 95.00th=[31065], 00:09:29.037 | 99.00th=[67634], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:09:29.037 | 99.99th=[85459] 00:09:29.037 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:09:29.037 slat (nsec): min=1919, max=22095k, avg=141029.44, stdev=1065700.44 00:09:29.037 clat (usec): min=6967, max=79121, avg=18766.63, stdev=13715.38 00:09:29.037 lat (usec): min=7004, max=79145, avg=18907.66, stdev=13814.72 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[10159], 00:09:29.037 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12387], 60.00th=[12911], 00:09:29.037 | 70.00th=[16712], 80.00th=[28967], 90.00th=[40633], 95.00th=[51119], 00:09:29.037 | 99.00th=[64226], 99.50th=[67634], 99.90th=[67634], 99.95th=[69731], 00:09:29.037 | 99.99th=[79168] 00:09:29.037 bw ( KiB/s): min=10320, max=18352, per=19.45%, avg=14336.00, stdev=5679.48, samples=2 00:09:29.037 iops : min= 2580, max= 4588, avg=3584.00, stdev=1419.87, samples=2 00:09:29.037 lat (msec) : 10=10.69%, 20=67.15%, 50=18.09%, 100=4.08% 00:09:29.037 cpu : usr=2.38%, sys=4.46%, ctx=245, majf=0, minf=1 00:09:29.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:29.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.037 issued rwts: total=3508,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.037 job3: 
(groupid=0, jobs=1): err= 0: pid=2543463: Wed Nov 20 09:48:02 2024 00:09:29.037 read: IOPS=3587, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1010msec) 00:09:29.037 slat (nsec): min=1580, max=16373k, avg=109020.71, stdev=750166.58 00:09:29.037 clat (usec): min=5386, max=46000, avg=13200.97, stdev=5511.00 00:09:29.037 lat (usec): min=6950, max=46028, avg=13309.99, stdev=5585.03 00:09:29.037 clat percentiles (usec): 00:09:29.037 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10421], 00:09:29.037 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:09:29.037 | 70.00th=[12518], 80.00th=[13829], 90.00th=[20841], 95.00th=[28181], 00:09:29.037 | 99.00th=[32113], 99.50th=[32113], 99.90th=[39584], 99.95th=[42206], 00:09:29.037 | 99.99th=[45876] 00:09:29.037 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:09:29.037 slat (usec): min=2, max=12562, avg=142.10, stdev=785.61 00:09:29.037 clat (msec): min=5, max=126, avg=19.47, stdev=20.39 00:09:29.037 lat (msec): min=6, max=126, avg=19.61, stdev=20.52 00:09:29.037 clat percentiles (msec): 00:09:29.037 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 11], 00:09:29.037 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:09:29.037 | 70.00th=[ 15], 80.00th=[ 19], 90.00th=[ 43], 95.00th=[ 59], 00:09:29.037 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:09:29.037 | 99.99th=[ 127] 00:09:29.037 bw ( KiB/s): min=11200, max=20856, per=21.74%, avg=16028.00, stdev=6827.82, samples=2 00:09:29.037 iops : min= 2800, max= 5214, avg=4007.00, stdev=1706.96, samples=2 00:09:29.037 lat (msec) : 10=10.48%, 20=74.66%, 50=11.69%, 100=1.94%, 250=1.23% 00:09:29.037 cpu : usr=3.27%, sys=5.35%, ctx=424, majf=0, minf=1 00:09:29.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:29.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:09:29.037 issued rwts: total=3623,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.037 00:09:29.037 Run status group 0 (all jobs): 00:09:29.037 READ: bw=67.2MiB/s (70.4MB/s), 13.6MiB/s-21.8MiB/s (14.2MB/s-22.9MB/s), io=67.9MiB (71.2MB), run=1007-1010msec 00:09:29.037 WRITE: bw=72.0MiB/s (75.5MB/s), 13.9MiB/s-22.7MiB/s (14.5MB/s-23.8MB/s), io=72.7MiB (76.2MB), run=1007-1010msec 00:09:29.037 00:09:29.037 Disk stats (read/write): 00:09:29.037 nvme0n1: ios=4474/4608, merge=0/0, ticks=23304/27891, in_queue=51195, util=86.77% 00:09:29.037 nvme0n2: ios=4389/4608, merge=0/0, ticks=48012/46019, in_queue=94031, util=89.23% 00:09:29.037 nvme0n3: ios=2617/2906, merge=0/0, ticks=15844/18978, in_queue=34822, util=93.33% 00:09:29.037 nvme0n4: ios=3641/3911, merge=0/0, ticks=23737/25609, in_queue=49346, util=92.86% 00:09:29.038 09:48:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:29.038 [global] 00:09:29.038 thread=1 00:09:29.038 invalidate=1 00:09:29.038 rw=randwrite 00:09:29.038 time_based=1 00:09:29.038 runtime=1 00:09:29.038 ioengine=libaio 00:09:29.038 direct=1 00:09:29.038 bs=4096 00:09:29.038 iodepth=128 00:09:29.038 norandommap=0 00:09:29.038 numjobs=1 00:09:29.038 00:09:29.038 verify_dump=1 00:09:29.038 verify_backlog=512 00:09:29.038 verify_state_save=0 00:09:29.038 do_verify=1 00:09:29.038 verify=crc32c-intel 00:09:29.038 [job0] 00:09:29.038 filename=/dev/nvme0n1 00:09:29.038 [job1] 00:09:29.038 filename=/dev/nvme0n2 00:09:29.038 [job2] 00:09:29.038 filename=/dev/nvme0n3 00:09:29.038 [job3] 00:09:29.038 filename=/dev/nvme0n4 00:09:29.038 Could not set queue depth (nvme0n1) 00:09:29.038 Could not set queue depth (nvme0n2) 00:09:29.038 Could not set queue depth (nvme0n3) 00:09:29.038 Could not set queue depth (nvme0n4) 00:09:29.038 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.038 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.038 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.038 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.038 fio-3.35 00:09:29.038 Starting 4 threads 00:09:30.412 00:09:30.412 job0: (groupid=0, jobs=1): err= 0: pid=2543895: Wed Nov 20 09:48:03 2024 00:09:30.412 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:09:30.412 slat (nsec): min=1252, max=4589.9k, avg=76158.16, stdev=401577.66 00:09:30.412 clat (usec): min=5841, max=26803, avg=9968.50, stdev=1724.28 00:09:30.412 lat (usec): min=5844, max=26810, avg=10044.66, stdev=1748.68 00:09:30.412 clat percentiles (usec): 00:09:30.412 | 1.00th=[ 6718], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:09:30.412 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:09:30.412 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[11994], 00:09:30.412 | 99.00th=[14353], 99.50th=[16450], 99.90th=[26608], 99.95th=[26608], 00:09:30.412 | 99.99th=[26870] 00:09:30.412 write: IOPS=5846, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1007msec); 0 zone resets 00:09:30.412 slat (usec): min=2, max=18828, avg=92.06, stdev=572.49 00:09:30.412 clat (usec): min=3969, max=56982, avg=11968.59, stdev=6338.97 00:09:30.412 lat (usec): min=5794, max=57016, avg=12060.65, stdev=6390.71 00:09:30.412 clat percentiles (usec): 00:09:30.412 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8225], 00:09:30.412 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:09:30.412 | 70.00th=[10552], 80.00th=[12780], 90.00th=[18220], 95.00th=[26084], 00:09:30.412 | 99.00th=[38536], 99.50th=[45876], 99.90th=[46400], 99.95th=[49021], 
00:09:30.412 | 99.99th=[56886] 00:09:30.412 bw ( KiB/s): min=18088, max=27984, per=33.14%, avg=23036.00, stdev=6997.53, samples=2 00:09:30.412 iops : min= 4522, max= 6996, avg=5759.00, stdev=1749.38, samples=2 00:09:30.412 lat (msec) : 4=0.01%, 10=45.14%, 20=50.39%, 50=4.45%, 100=0.01% 00:09:30.412 cpu : usr=4.17%, sys=6.26%, ctx=633, majf=0, minf=1 00:09:30.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:30.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.413 issued rwts: total=5632,5887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.413 job1: (groupid=0, jobs=1): err= 0: pid=2543896: Wed Nov 20 09:48:03 2024 00:09:30.413 read: IOPS=2836, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1006msec) 00:09:30.413 slat (nsec): min=1560, max=19509k, avg=174988.73, stdev=1145662.33 00:09:30.413 clat (usec): min=2211, max=61361, avg=22553.48, stdev=12073.83 00:09:30.413 lat (usec): min=6023, max=61388, avg=22728.46, stdev=12173.88 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 6390], 5.00th=[12256], 10.00th=[13042], 20.00th=[13829], 00:09:30.413 | 30.00th=[14615], 40.00th=[15533], 50.00th=[16319], 60.00th=[17695], 00:09:30.413 | 70.00th=[28181], 80.00th=[36963], 90.00th=[40109], 95.00th=[45351], 00:09:30.413 | 99.00th=[55837], 99.50th=[55837], 99.90th=[57934], 99.95th=[59507], 00:09:30.413 | 99.99th=[61604] 00:09:30.413 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:09:30.413 slat (usec): min=2, max=35036, avg=156.37, stdev=1045.51 00:09:30.413 clat (usec): min=7559, max=68891, avg=20421.41, stdev=10743.70 00:09:30.413 lat (usec): min=7574, max=68923, avg=20577.78, stdev=10826.22 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11863], 20.00th=[13304], 00:09:30.413 | 
30.00th=[13566], 40.00th=[13829], 50.00th=[15270], 60.00th=[17957], 00:09:30.413 | 70.00th=[22152], 80.00th=[28705], 90.00th=[34866], 95.00th=[40109], 00:09:30.413 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:09:30.413 | 99.99th=[68682] 00:09:30.413 bw ( KiB/s): min=12288, max=12288, per=17.68%, avg=12288.00, stdev= 0.00, samples=2 00:09:30.413 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:30.413 lat (msec) : 4=0.02%, 10=1.40%, 20=64.97%, 50=30.95%, 100=2.67% 00:09:30.413 cpu : usr=4.28%, sys=3.28%, ctx=264, majf=0, minf=1 00:09:30.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:30.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.413 issued rwts: total=2854,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.413 job2: (groupid=0, jobs=1): err= 0: pid=2543898: Wed Nov 20 09:48:03 2024 00:09:30.413 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:09:30.413 slat (nsec): min=1499, max=13557k, avg=128338.54, stdev=841384.96 00:09:30.413 clat (usec): min=5937, max=45542, avg=15153.67, stdev=6644.08 00:09:30.413 lat (usec): min=5945, max=45549, avg=15282.01, stdev=6709.32 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[10814], 20.00th=[11600], 00:09:30.413 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[13566], 00:09:30.413 | 70.00th=[14091], 80.00th=[17695], 90.00th=[22414], 95.00th=[31589], 00:09:30.413 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:09:30.413 | 99.99th=[45351] 00:09:30.413 write: IOPS=3424, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1009msec); 0 zone resets 00:09:30.413 slat (usec): min=2, max=12277, avg=165.48, stdev=754.76 00:09:30.413 clat (usec): min=2499, max=61419, avg=23490.98, 
stdev=11674.68 00:09:30.413 lat (usec): min=2508, max=61427, avg=23656.46, stdev=11746.19 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 5735], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[12649], 00:09:30.413 | 30.00th=[15533], 40.00th=[18220], 50.00th=[21365], 60.00th=[23987], 00:09:30.413 | 70.00th=[29230], 80.00th=[33817], 90.00th=[37487], 95.00th=[45876], 00:09:30.413 | 99.00th=[57934], 99.50th=[60556], 99.90th=[61604], 99.95th=[61604], 00:09:30.413 | 99.99th=[61604] 00:09:30.413 bw ( KiB/s): min=12288, max=14328, per=19.15%, avg=13308.00, stdev=1442.50, samples=2 00:09:30.413 iops : min= 3072, max= 3582, avg=3327.00, stdev=360.62, samples=2 00:09:30.413 lat (msec) : 4=0.18%, 10=4.81%, 20=59.89%, 50=33.19%, 100=1.93% 00:09:30.413 cpu : usr=2.88%, sys=3.97%, ctx=386, majf=0, minf=2 00:09:30.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:30.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.413 issued rwts: total=3072,3455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.413 job3: (groupid=0, jobs=1): err= 0: pid=2543900: Wed Nov 20 09:48:03 2024 00:09:30.413 read: IOPS=4945, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:09:30.413 slat (nsec): min=1581, max=18175k, avg=100227.14, stdev=731416.78 00:09:30.413 clat (usec): min=3711, max=37869, avg=12834.21, stdev=4021.15 00:09:30.413 lat (usec): min=5227, max=37894, avg=12934.43, stdev=4080.77 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10552], 00:09:30.413 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[12125], 00:09:30.413 | 70.00th=[13173], 80.00th=[14746], 90.00th=[17695], 95.00th=[20317], 00:09:30.413 | 99.00th=[29754], 99.50th=[29754], 99.90th=[29754], 99.95th=[35914], 00:09:30.413 | 
99.99th=[38011] 00:09:30.413 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:09:30.413 slat (usec): min=2, max=13224, avg=87.28, stdev=638.88 00:09:30.413 clat (usec): min=1601, max=35941, avg=12358.59, stdev=5399.21 00:09:30.413 lat (usec): min=1719, max=35948, avg=12445.87, stdev=5452.70 00:09:30.413 clat percentiles (usec): 00:09:30.413 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 7701], 20.00th=[ 9241], 00:09:30.413 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:09:30.413 | 70.00th=[12387], 80.00th=[14222], 90.00th=[18220], 95.00th=[24249], 00:09:30.413 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:09:30.413 | 99.99th=[35914] 00:09:30.413 bw ( KiB/s): min=20480, max=20480, per=29.46%, avg=20480.00, stdev= 0.00, samples=2 00:09:30.413 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:30.413 lat (msec) : 2=0.07%, 4=0.19%, 10=18.58%, 20=74.72%, 50=6.44% 00:09:30.413 cpu : usr=4.57%, sys=7.46%, ctx=354, majf=0, minf=1 00:09:30.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:30.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.413 issued rwts: total=4980,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.413 00:09:30.413 Run status group 0 (all jobs): 00:09:30.413 READ: bw=64.0MiB/s (67.1MB/s), 11.1MiB/s-21.8MiB/s (11.6MB/s-22.9MB/s), io=64.6MiB (67.7MB), run=1006-1009msec 00:09:30.413 WRITE: bw=67.9MiB/s (71.2MB/s), 11.9MiB/s-22.8MiB/s (12.5MB/s-23.9MB/s), io=68.5MiB (71.8MB), run=1006-1009msec 00:09:30.413 00:09:30.413 Disk stats (read/write): 00:09:30.413 nvme0n1: ios=5144/5164, merge=0/0, ticks=17552/17961, in_queue=35513, util=98.00% 00:09:30.413 nvme0n2: ios=2223/2560, merge=0/0, ticks=19381/14842, in_queue=34223, util=99.70% 
00:09:30.413 nvme0n3: ios=2560/2703, merge=0/0, ticks=37050/60998, in_queue=98048, util=88.85% 00:09:30.413 nvme0n4: ios=4387/4608, merge=0/0, ticks=46015/40894, in_queue=86909, util=99.79% 00:09:30.413 09:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:30.413 09:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2544129 00:09:30.413 09:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:30.413 09:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:30.413 [global] 00:09:30.413 thread=1 00:09:30.413 invalidate=1 00:09:30.413 rw=read 00:09:30.413 time_based=1 00:09:30.413 runtime=10 00:09:30.413 ioengine=libaio 00:09:30.413 direct=1 00:09:30.413 bs=4096 00:09:30.413 iodepth=1 00:09:30.413 norandommap=1 00:09:30.413 numjobs=1 00:09:30.413 00:09:30.413 [job0] 00:09:30.413 filename=/dev/nvme0n1 00:09:30.413 [job1] 00:09:30.413 filename=/dev/nvme0n2 00:09:30.413 [job2] 00:09:30.413 filename=/dev/nvme0n3 00:09:30.413 [job3] 00:09:30.413 filename=/dev/nvme0n4 00:09:30.413 Could not set queue depth (nvme0n1) 00:09:30.413 Could not set queue depth (nvme0n2) 00:09:30.413 Could not set queue depth (nvme0n3) 00:09:30.413 Could not set queue depth (nvme0n4) 00:09:30.670 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.670 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.670 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.670 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.670 fio-3.35 00:09:30.670 Starting 4 threads 00:09:33.228 09:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:33.486 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=31891456, buflen=4096 00:09:33.486 fio: pid=2544272, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.486 09:48:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:33.743 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:09:33.743 fio: pid=2544271, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.743 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.743 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:34.002 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.002 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:34.002 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5599232, buflen=4096 00:09:34.002 fio: pid=2544269, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.261 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.261 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:34.261 fio: io_u error on file /dev/nvme0n2: Operation not 
supported: read offset=27512832, buflen=4096 00:09:34.261 fio: pid=2544270, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.261 00:09:34.261 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2544269: Wed Nov 20 09:48:07 2024 00:09:34.261 read: IOPS=430, BW=1719KiB/s (1761kB/s)(5468KiB/3180msec) 00:09:34.261 slat (usec): min=4, max=22742, avg=24.08, stdev=614.68 00:09:34.261 clat (usec): min=162, max=42187, avg=2284.21, stdev=8982.33 00:09:34.261 lat (usec): min=167, max=42195, avg=2308.29, stdev=9000.80 00:09:34.261 clat percentiles (usec): 00:09:34.261 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 182], 00:09:34.261 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:09:34.261 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 239], 95.00th=[39060], 00:09:34.261 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:34.261 | 99.99th=[42206] 00:09:34.261 bw ( KiB/s): min= 112, max= 5016, per=8.93%, avg=1680.50, stdev=2248.56, samples=6 00:09:34.261 iops : min= 28, max= 1254, avg=420.00, stdev=561.98, samples=6 00:09:34.261 lat (usec) : 250=91.59%, 500=3.07%, 750=0.07%, 1000=0.07% 00:09:34.261 lat (msec) : 50=5.12% 00:09:34.261 cpu : usr=0.13%, sys=0.35%, ctx=1370, majf=0, minf=1 00:09:34.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.261 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2544270: Wed Nov 20 09:48:07 2024 00:09:34.261 read: IOPS=1982, BW=7930KiB/s (8121kB/s)(26.2MiB/3388msec) 00:09:34.261 slat (usec): min=5, 
max=20968, avg=13.40, stdev=312.56 00:09:34.261 clat (usec): min=147, max=42084, avg=486.44, stdev=3456.76 00:09:34.261 lat (usec): min=154, max=63053, avg=499.85, stdev=3544.41 00:09:34.261 clat percentiles (usec): 00:09:34.261 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:09:34.261 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:09:34.261 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 225], 00:09:34.261 | 99.00th=[ 273], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:34.261 | 99.99th=[42206] 00:09:34.261 bw ( KiB/s): min= 96, max=19912, per=45.97%, avg=8652.83, stdev=8889.77, samples=6 00:09:34.261 iops : min= 24, max= 4978, avg=2163.17, stdev=2222.44, samples=6 00:09:34.261 lat (usec) : 250=98.33%, 500=0.89%, 750=0.04% 00:09:34.261 lat (msec) : 50=0.71% 00:09:34.261 cpu : usr=0.56%, sys=1.68%, ctx=6722, majf=0, minf=1 00:09:34.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.261 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2544271: Wed Nov 20 09:48:07 2024 00:09:34.261 read: IOPS=24, BW=98.0KiB/s (100kB/s)(288KiB/2939msec) 00:09:34.261 slat (nsec): min=10587, max=36417, avg=19644.93, stdev=4972.15 00:09:34.261 clat (usec): min=522, max=44011, avg=40489.88, stdev=4792.23 00:09:34.261 lat (usec): min=558, max=44036, avg=40509.48, stdev=4790.29 00:09:34.261 clat percentiles (usec): 00:09:34.261 | 1.00th=[ 523], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:34.261 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:34.261 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41681], 00:09:34.261 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:34.261 | 99.99th=[43779] 00:09:34.261 bw ( KiB/s): min= 96, max= 104, per=0.52%, avg=97.60, stdev= 3.58, samples=5 00:09:34.261 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:09:34.261 lat (usec) : 750=1.37% 00:09:34.261 lat (msec) : 50=97.26% 00:09:34.261 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=2 00:09:34.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.261 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2544272: Wed Nov 20 09:48:07 2024 00:09:34.261 read: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(30.4MiB/2727msec) 00:09:34.261 slat (nsec): min=6346, max=31962, avg=7390.52, stdev=1431.27 00:09:34.261 clat (usec): min=159, max=42107, avg=338.90, stdev=2447.24 00:09:34.261 lat (usec): min=167, max=42130, avg=346.29, stdev=2448.11 00:09:34.261 clat percentiles (usec): 00:09:34.261 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:09:34.261 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:09:34.261 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:09:34.261 | 99.00th=[ 237], 99.50th=[ 306], 99.90th=[41157], 99.95th=[41157], 00:09:34.261 | 99.99th=[42206] 00:09:34.261 bw ( KiB/s): min= 112, max=20424, per=66.14%, avg=12449.60, stdev=10429.68, samples=5 00:09:34.261 iops : min= 28, max= 5106, avg=3112.40, stdev=2607.42, samples=5 00:09:34.261 lat (usec) : 250=99.32%, 500=0.31% 00:09:34.261 lat (msec) : 50=0.36% 00:09:34.261 cpu : usr=0.51%, sys=2.79%, ctx=7787, majf=0, minf=2 
00:09:34.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.261 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.262 issued rwts: total=7787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.262 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.262 00:09:34.262 Run status group 0 (all jobs): 00:09:34.262 READ: bw=18.4MiB/s (19.3MB/s), 98.0KiB/s-11.2MiB/s (100kB/s-11.7MB/s), io=62.3MiB (65.3MB), run=2727-3388msec 00:09:34.262 00:09:34.262 Disk stats (read/write): 00:09:34.262 nvme0n1: ios=1365/0, merge=0/0, ticks=3045/0, in_queue=3045, util=95.07% 00:09:34.262 nvme0n2: ios=6716/0, merge=0/0, ticks=3212/0, in_queue=3212, util=95.29% 00:09:34.262 nvme0n3: ios=70/0, merge=0/0, ticks=2835/0, in_queue=2835, util=96.55% 00:09:34.262 nvme0n4: ios=7783/0, merge=0/0, ticks=2488/0, in_queue=2488, util=96.41% 00:09:34.519 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.520 09:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:34.520 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.520 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:34.777 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.777 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc5 00:09:35.035 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.035 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2544129 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:09:35.293 nvmf hotplug test: fio failed as expected 00:09:35.293 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.551 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:35.551 09:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.551 rmmod nvme_tcp 00:09:35.551 rmmod nvme_fabrics 00:09:35.551 rmmod nvme_keyring 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2541219 ']' 
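The teardown trace above probes the target process with `kill -0` before deciding whether it still needs to be killed. As a standalone illustration of that probe (a minimal sketch, not the actual `killprocess` helper from `autotest_common.sh`; `$$` is used here only as a PID guaranteed to exist):

```shell
# Probe whether a PID is alive without sending it a real signal.
# kill -0 performs the permission/existence check only; it delivers nothing.
pid=$$   # placeholder PID; the trace above uses the nvmf target's own PID
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
else
    echo "process $pid is gone"
fi
```

The `2>/dev/null` matters: `kill -0` on a reaped PID prints an error as well as returning nonzero, which would pollute an xtrace log like this one.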
00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2541219 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2541219 ']' 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2541219 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.551 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541219 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541219' 00:09:35.809 killing process with pid 2541219 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2541219 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2541219 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.809 09:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.344 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.345 00:09:38.345 real 0m26.848s 00:09:38.345 user 1m46.480s 00:09:38.345 sys 0m8.453s 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.345 ************************************ 00:09:38.345 END TEST nvmf_fio_target 00:09:38.345 ************************************ 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.345 ************************************ 00:09:38.345 START TEST nvmf_bdevio 00:09:38.345 ************************************ 00:09:38.345 09:48:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.345 * Looking for test storage... 00:09:38.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:09:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.345 --rc genhtml_branch_coverage=1 00:09:38.345 --rc genhtml_function_coverage=1 00:09:38.345 --rc genhtml_legend=1 00:09:38.345 --rc geninfo_all_blocks=1 00:09:38.345 --rc geninfo_unexecuted_blocks=1 00:09:38.345 00:09:38.345 ' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.345 --rc genhtml_branch_coverage=1 00:09:38.345 --rc genhtml_function_coverage=1 00:09:38.345 --rc genhtml_legend=1 00:09:38.345 --rc geninfo_all_blocks=1 00:09:38.345 --rc geninfo_unexecuted_blocks=1 00:09:38.345 00:09:38.345 ' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.345 --rc genhtml_branch_coverage=1 00:09:38.345 --rc genhtml_function_coverage=1 00:09:38.345 --rc genhtml_legend=1 00:09:38.345 --rc geninfo_all_blocks=1 00:09:38.345 --rc geninfo_unexecuted_blocks=1 00:09:38.345 00:09:38.345 ' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.345 --rc genhtml_branch_coverage=1 00:09:38.345 --rc genhtml_function_coverage=1 00:09:38.345 --rc genhtml_legend=1 00:09:38.345 --rc geninfo_all_blocks=1 00:09:38.345 --rc geninfo_unexecuted_blocks=1 00:09:38.345 00:09:38.345 ' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.345 09:48:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.345 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.346 09:48:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.916 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.917 09:48:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.917 09:48:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:44.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:44.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.917 
09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:44.917 Found net devices under 0000:86:00.0: cvl_0_0 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:44.917 Found net devices under 0000:86:00.1: cvl_0_1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:09:44.917 00:09:44.917 --- 10.0.0.2 ping statistics --- 00:09:44.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.917 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:09:44.917 00:09:44.917 --- 10.0.0.1 ping statistics --- 00:09:44.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.917 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.917 09:48:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2549128 00:09:44.917 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2549128 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2549128 ']' 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 [2024-11-20 09:48:17.724749] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:09:44.918 [2024-11-20 09:48:17.724802] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.918 [2024-11-20 09:48:17.803471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.918 [2024-11-20 09:48:17.844221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.918 [2024-11-20 09:48:17.844260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.918 [2024-11-20 09:48:17.844268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.918 [2024-11-20 09:48:17.844274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.918 [2024-11-20 09:48:17.844280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:44.918 [2024-11-20 09:48:17.845768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:44.918 [2024-11-20 09:48:17.845877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:44.918 [2024-11-20 09:48:17.845988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:44.918 [2024-11-20 09:48:17.845993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 [2024-11-20 09:48:17.993746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.918 09:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.918 09:48:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 Malloc0 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.918 [2024-11-20 09:48:18.056487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.918 { 00:09:44.918 "params": { 00:09:44.918 "name": "Nvme$subsystem", 00:09:44.918 "trtype": "$TEST_TRANSPORT", 00:09:44.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.918 "adrfam": "ipv4", 00:09:44.918 "trsvcid": "$NVMF_PORT", 00:09:44.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.918 "hdgst": ${hdgst:-false}, 00:09:44.918 "ddgst": ${ddgst:-false} 00:09:44.918 }, 00:09:44.918 "method": "bdev_nvme_attach_controller" 00:09:44.918 } 00:09:44.918 EOF 00:09:44.918 )") 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:44.918 09:48:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.918 "params": { 00:09:44.918 "name": "Nvme1", 00:09:44.918 "trtype": "tcp", 00:09:44.918 "traddr": "10.0.0.2", 00:09:44.918 "adrfam": "ipv4", 00:09:44.918 "trsvcid": "4420", 00:09:44.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.918 "hdgst": false, 00:09:44.918 "ddgst": false 00:09:44.918 }, 00:09:44.918 "method": "bdev_nvme_attach_controller" 00:09:44.918 }' 00:09:44.918 [2024-11-20 09:48:18.109276] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:09:44.918 [2024-11-20 09:48:18.109325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549157 ] 00:09:44.918 [2024-11-20 09:48:18.185701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.918 [2024-11-20 09:48:18.229427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.918 [2024-11-20 09:48:18.229533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.918 [2024-11-20 09:48:18.229534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.918 I/O targets: 00:09:44.918 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:44.918 00:09:44.918 00:09:44.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.918 http://cunit.sourceforge.net/ 00:09:44.918 00:09:44.918 00:09:44.918 Suite: bdevio tests on: Nvme1n1 00:09:44.918 Test: blockdev write read block ...passed 00:09:45.176 Test: blockdev write zeroes read block ...passed 00:09:45.176 Test: blockdev write zeroes read no split ...passed 00:09:45.176 Test: blockdev write zeroes read split 
...passed 00:09:45.176 Test: blockdev write zeroes read split partial ...passed 00:09:45.176 Test: blockdev reset ...[2024-11-20 09:48:18.539436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:45.176 [2024-11-20 09:48:18.539501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c6340 (9): Bad file descriptor 00:09:45.176 [2024-11-20 09:48:18.553992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:45.176 passed 00:09:45.176 Test: blockdev write read 8 blocks ...passed 00:09:45.176 Test: blockdev write read size > 128k ...passed 00:09:45.176 Test: blockdev write read invalid size ...passed 00:09:45.176 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:45.176 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:45.176 Test: blockdev write read max offset ...passed 00:09:45.176 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:45.176 Test: blockdev writev readv 8 blocks ...passed 00:09:45.434 Test: blockdev writev readv 30 x 1block ...passed 00:09:45.434 Test: blockdev writev readv block ...passed 00:09:45.434 Test: blockdev writev readv size > 128k ...passed 00:09:45.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:45.434 Test: blockdev comparev and writev ...[2024-11-20 09:48:18.804865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.804894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.804909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 
09:48:18.804918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.805767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:45.434 [2024-11-20 09:48:18.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:45.434 passed 00:09:45.434 Test: blockdev nvme passthru rw ...passed 00:09:45.434 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:48:18.887656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.434 [2024-11-20 09:48:18.887673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.887782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.434 [2024-11-20 09:48:18.887793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.887895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.434 [2024-11-20 09:48:18.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:45.434 [2024-11-20 09:48:18.888003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.434 [2024-11-20 09:48:18.888013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:45.434 passed 00:09:45.434 Test: blockdev nvme admin passthru ...passed 00:09:45.434 Test: blockdev copy ...passed 00:09:45.434 00:09:45.434 Run Summary: Type Total Ran Passed Failed Inactive 00:09:45.434 suites 1 1 n/a 0 0 00:09:45.434 tests 23 23 23 0 0 00:09:45.434 asserts 152 152 152 0 n/a 00:09:45.434 00:09:45.435 Elapsed time = 1.038 seconds 
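Editor's note on the trace above: the JSON that bdevio consumed on `/dev/fd/62` is the `bdev_nvme_attach_controller` fragment printed by `gen_nvmf_target_json` (nvmf/common.sh@586). The following is a minimal standalone sketch that recreates that fragment, using the concrete values this run logged (TCP, 10.0.0.2:4420, cnode1); those values are parameters of this particular run, not constants of the test.

```shell
#!/usr/bin/env bash
# Hedged sketch: rebuild the attach-controller config fragment that the
# log shows gen_nvmf_target_json emitting. Values below are taken from
# the trace above; adjust them for a different target.
TEST_TRANSPORT="tcp"
NVMF_FIRST_TARGET_IP="10.0.0.2"
NVMF_PORT="4420"
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# In the test this is fed to bdevio via --json /dev/fd/62; here we only
# confirm the generated fragment is well-formed JSON.
echo "$config" | python3 -m json.tool > /dev/null && echo "config OK"
```

In the real harness this fragment is produced per subsystem and piped straight into the bdevio binary, which is why the log shows the expanded `printf '%s\n' '{ ... }'` immediately before the CUnit output.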
00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.693 rmmod nvme_tcp 00:09:45.693 rmmod nvme_fabrics 00:09:45.693 rmmod nvme_keyring 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2549128 ']' 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2549128 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2549128 ']' 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2549128 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549128 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549128' 00:09:45.693 killing process with pid 2549128 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2549128 00:09:45.693 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2549128 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.951 09:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.496 00:09:48.496 real 0m9.997s 00:09:48.496 user 0m9.845s 00:09:48.496 sys 0m4.978s 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 ************************************ 00:09:48.496 END TEST nvmf_bdevio 00:09:48.496 ************************************ 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:48.496 00:09:48.496 real 4m38.819s 00:09:48.496 user 10m27.407s 00:09:48.496 sys 1m38.471s 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.496 ************************************ 00:09:48.496 END TEST nvmf_target_core 00:09:48.496 ************************************ 00:09:48.496 09:48:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:48.496 09:48:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.496 09:48:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.496 09:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:48.496 ************************************ 00:09:48.496 START TEST nvmf_target_extra 00:09:48.496 ************************************ 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:48.496 * Looking for test storage... 00:09:48.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.496 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.497 --rc genhtml_branch_coverage=1 00:09:48.497 --rc genhtml_function_coverage=1 00:09:48.497 --rc genhtml_legend=1 00:09:48.497 --rc geninfo_all_blocks=1 
00:09:48.497 --rc geninfo_unexecuted_blocks=1 00:09:48.497 00:09:48.497 ' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.497 --rc genhtml_branch_coverage=1 00:09:48.497 --rc genhtml_function_coverage=1 00:09:48.497 --rc genhtml_legend=1 00:09:48.497 --rc geninfo_all_blocks=1 00:09:48.497 --rc geninfo_unexecuted_blocks=1 00:09:48.497 00:09:48.497 ' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.497 --rc genhtml_branch_coverage=1 00:09:48.497 --rc genhtml_function_coverage=1 00:09:48.497 --rc genhtml_legend=1 00:09:48.497 --rc geninfo_all_blocks=1 00:09:48.497 --rc geninfo_unexecuted_blocks=1 00:09:48.497 00:09:48.497 ' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.497 --rc genhtml_branch_coverage=1 00:09:48.497 --rc genhtml_function_coverage=1 00:09:48.497 --rc genhtml_legend=1 00:09:48.497 --rc geninfo_all_blocks=1 00:09:48.497 --rc geninfo_unexecuted_blocks=1 00:09:48.497 00:09:48.497 ' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:48.497 ************************************ 00:09:48.497 START TEST nvmf_example 00:09:48.497 ************************************ 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:48.497 * Looking for test storage... 00:09:48.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.497 
09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:48.497 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.498 --rc genhtml_branch_coverage=1 00:09:48.498 --rc genhtml_function_coverage=1 00:09:48.498 --rc genhtml_legend=1 00:09:48.498 --rc geninfo_all_blocks=1 00:09:48.498 --rc geninfo_unexecuted_blocks=1 00:09:48.498 00:09:48.498 ' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.498 --rc genhtml_branch_coverage=1 00:09:48.498 --rc genhtml_function_coverage=1 00:09:48.498 --rc genhtml_legend=1 00:09:48.498 --rc geninfo_all_blocks=1 00:09:48.498 --rc geninfo_unexecuted_blocks=1 00:09:48.498 00:09:48.498 ' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.498 --rc genhtml_branch_coverage=1 00:09:48.498 --rc genhtml_function_coverage=1 00:09:48.498 --rc genhtml_legend=1 00:09:48.498 --rc geninfo_all_blocks=1 00:09:48.498 --rc geninfo_unexecuted_blocks=1 00:09:48.498 00:09:48.498 ' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.498 --rc 
genhtml_branch_coverage=1 00:09:48.498 --rc genhtml_function_coverage=1 00:09:48.498 --rc genhtml_legend=1 00:09:48.498 --rc geninfo_all_blocks=1 00:09:48.498 --rc geninfo_unexecuted_blocks=1 00:09:48.498 00:09:48.498 ' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.498 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:48.498 09:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.498 
09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.498 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.069 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.070 09:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:55.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:55.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:55.070 Found net devices under 0000:86:00.0: cvl_0_0 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.070 09:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:55.070 Found net devices under 0000:86:00.1: cvl_0_1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.070 
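The `nvmf_tcp_init` sequence traced in the surrounding log entries (flush both ports, move the target port into a network namespace, assign the 10.0.0.0/24 addresses, open TCP port 4420, then ping both directions) can be condensed into a standalone sketch. Interface, namespace, and address names are taken from this log; `run` only prints each command so the sketch can be traced without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the two-port NVMe/TCP test topology built in this log.
set -euo pipefail

TARGET_IF=cvl_0_0        # port moved into the target namespace
INITIATOR_IF=cvl_0_1     # port left in the default (initiator) namespace
NETNS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

CMDS=""
run() { CMDS+="$* ; "; echo "+ $*"; }   # swap the body for: "$@"  to actually apply

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
# Open the NVMe/TCP listener port, then verify reachability both ways.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NETNS" ping -c 1 "$INITIATOR_IP"
```

Putting the target port in its own namespace is what lets one host exercise a real TCP path between initiator and target instead of short-circuiting through loopback.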
09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:09:55.070 00:09:55.070 --- 10.0.0.2 ping statistics --- 00:09:55.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.070 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:09:55.070 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:55.070 00:09:55.070 --- 10.0.0.1 ping statistics --- 00:09:55.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.070 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.070 09:48:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.070 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2552981 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2552981 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2552981 ']' 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:55.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.071 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:55.636 
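The `rpc_cmd` calls that follow in the log provision the example target step by step: create the TCP transport, back it with a malloc bdev, create a subsystem, attach the namespace, and add a listener. The same sequence, issued against a running SPDK app via `scripts/rpc.py` (path assumed relative to an SPDK checkout), looks like this; the `rpc` wrapper below only prints so the order can be inspected offline:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC provisioning sequence from this log.
set -euo pipefail

RPC="scripts/rpc.py"                 # assumed SPDK checkout-relative path
NQN=nqn.2016-06.io.spdk:cnode1

RPC_CALLS=""
rpc() { RPC_CALLS+="$* ; "; echo "$RPC $*"; }   # replace with: "$RPC" "$@"

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512                        # 64 MiB RAM bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0             # expose the bdev as NSID 1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The order matters: the transport must exist before a listener can be added, and the bdev must exist before it can be attached as a namespace.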
09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:55.636 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:07.887 Initializing NVMe Controllers 00:10:07.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:07.887 Initialization complete. Launching workers. 00:10:07.887 ======================================================== 00:10:07.887 Latency(us) 00:10:07.887 Device Information : IOPS MiB/s Average min max 00:10:07.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18365.99 71.74 3484.53 684.87 41760.17 00:10:07.887 ======================================================== 00:10:07.887 Total : 18365.99 71.74 3484.53 684.87 41760.17 00:10:07.887 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.887 rmmod nvme_tcp 00:10:07.887 rmmod nvme_fabrics 00:10:07.887 rmmod nvme_keyring 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2552981 ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2552981 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2552981 ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2552981 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552981 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552981' 00:10:07.887 killing process with pid 2552981 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2552981 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2552981 00:10:07.887 nvmf threads initialize successfully 00:10:07.887 bdev subsystem init successfully 00:10:07.887 created a nvmf target service 00:10:07.887 create targets's poll groups done 00:10:07.887 all subsystems of target started 00:10:07.887 nvmf target is running 00:10:07.887 all subsystems of target stopped 00:10:07.887 destroy targets's poll groups done 00:10:07.887 destroyed the nvmf target service 00:10:07.887 bdev subsystem 
finish successfully 00:10:07.887 nvmf threads destroy successfully 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.887 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.456 00:10:08.456 real 0m20.008s 00:10:08.456 user 0m46.523s 00:10:08.456 sys 0m6.176s 00:10:08.456 
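The teardown traced above (the `iptr` helper plus `remove_spdk_ns`) undoes the earlier setup: every iptables rule tagged with the `SPDK_NVMF` comment is stripped via a save/filter/restore round-trip, the target namespace is removed, and the initiator address is flushed. A hedged dry-run sketch of that cleanup, with names taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cleanup phase; run() prints instead of executing.
set -euo pipefail

NETNS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

TEARDOWN=""
run() { TEARDOWN+="$* ; "; echo "+ $*"; }

# Drop only the rules this test inserted: they all carry the SPDK_NVMF comment,
# so a save/grep -v/restore round-trip leaves unrelated rules untouched.
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete "$NETNS"          # also releases the interface held inside
run ip -4 addr flush "$INITIATOR_IF"
```

Tagging the rules at insertion time is what makes this cleanup idempotent and safe on a shared CI host.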
09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.456 ************************************ 00:10:08.456 END TEST nvmf_example 00:10:08.456 ************************************ 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.456 ************************************ 00:10:08.456 START TEST nvmf_filesystem 00:10:08.456 ************************************ 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:08.456 * Looking for test storage... 
00:10:08.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.456 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:08.721 
09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.721 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:08.721 --rc genhtml_branch_coverage=1 00:10:08.721 --rc genhtml_function_coverage=1 00:10:08.721 --rc genhtml_legend=1 00:10:08.721 --rc geninfo_all_blocks=1 00:10:08.721 --rc geninfo_unexecuted_blocks=1 00:10:08.721 00:10:08.721 ' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.721 --rc genhtml_branch_coverage=1 00:10:08.721 --rc genhtml_function_coverage=1 00:10:08.721 --rc genhtml_legend=1 00:10:08.721 --rc geninfo_all_blocks=1 00:10:08.721 --rc geninfo_unexecuted_blocks=1 00:10:08.721 00:10:08.721 ' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.721 --rc genhtml_branch_coverage=1 00:10:08.721 --rc genhtml_function_coverage=1 00:10:08.721 --rc genhtml_legend=1 00:10:08.721 --rc geninfo_all_blocks=1 00:10:08.721 --rc geninfo_unexecuted_blocks=1 00:10:08.721 00:10:08.721 ' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.721 --rc genhtml_branch_coverage=1 00:10:08.721 --rc genhtml_function_coverage=1 00:10:08.721 --rc genhtml_legend=1 00:10:08.721 --rc geninfo_all_blocks=1 00:10:08.721 --rc geninfo_unexecuted_blocks=1 00:10:08.721 00:10:08.721 ' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:08.721 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:08.721 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:08.721 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:08.722 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:08.722 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:08.722 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:08.722 
09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:08.722 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:08.722 #define SPDK_CONFIG_H 00:10:08.722 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:08.722 #define SPDK_CONFIG_APPS 1 00:10:08.722 #define SPDK_CONFIG_ARCH native 00:10:08.722 #undef SPDK_CONFIG_ASAN 00:10:08.722 #undef SPDK_CONFIG_AVAHI 00:10:08.722 #undef SPDK_CONFIG_CET 00:10:08.722 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:08.722 #define SPDK_CONFIG_COVERAGE 1 00:10:08.722 #define SPDK_CONFIG_CROSS_PREFIX 00:10:08.722 #undef SPDK_CONFIG_CRYPTO 00:10:08.722 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:08.722 #undef SPDK_CONFIG_CUSTOMOCF 00:10:08.722 #undef SPDK_CONFIG_DAOS 00:10:08.722 #define SPDK_CONFIG_DAOS_DIR 00:10:08.722 #define SPDK_CONFIG_DEBUG 1 00:10:08.722 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:08.723 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:08.723 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:08.723 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:08.723 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:08.723 #undef SPDK_CONFIG_DPDK_UADK 00:10:08.723 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:08.723 #define SPDK_CONFIG_EXAMPLES 1 00:10:08.723 #undef SPDK_CONFIG_FC 00:10:08.723 #define SPDK_CONFIG_FC_PATH 00:10:08.723 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:08.723 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:08.723 #define SPDK_CONFIG_FSDEV 1 00:10:08.723 #undef SPDK_CONFIG_FUSE 00:10:08.723 #undef SPDK_CONFIG_FUZZER 00:10:08.723 #define SPDK_CONFIG_FUZZER_LIB 00:10:08.723 #undef SPDK_CONFIG_GOLANG 00:10:08.723 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:08.723 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:08.723 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:08.723 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:08.723 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:08.723 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:08.723 #undef SPDK_CONFIG_HAVE_LZ4 00:10:08.723 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:08.723 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:08.723 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:08.723 #define SPDK_CONFIG_IDXD 1 00:10:08.723 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:08.723 #undef SPDK_CONFIG_IPSEC_MB 00:10:08.723 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:08.723 #define SPDK_CONFIG_ISAL 1 00:10:08.723 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:08.723 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:08.723 #define SPDK_CONFIG_LIBDIR 00:10:08.723 #undef SPDK_CONFIG_LTO 00:10:08.723 #define SPDK_CONFIG_MAX_LCORES 128 00:10:08.723 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:08.723 #define SPDK_CONFIG_NVME_CUSE 1 00:10:08.723 #undef SPDK_CONFIG_OCF 00:10:08.723 #define SPDK_CONFIG_OCF_PATH 00:10:08.723 #define SPDK_CONFIG_OPENSSL_PATH 00:10:08.723 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:08.723 #define SPDK_CONFIG_PGO_DIR 00:10:08.723 #undef SPDK_CONFIG_PGO_USE 00:10:08.723 #define SPDK_CONFIG_PREFIX /usr/local 00:10:08.723 #undef SPDK_CONFIG_RAID5F 00:10:08.723 #undef SPDK_CONFIG_RBD 00:10:08.723 #define SPDK_CONFIG_RDMA 1 00:10:08.723 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:08.723 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:08.723 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:08.723 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:08.723 #define SPDK_CONFIG_SHARED 1 00:10:08.723 #undef SPDK_CONFIG_SMA 00:10:08.723 #define SPDK_CONFIG_TESTS 1 00:10:08.723 #undef SPDK_CONFIG_TSAN 00:10:08.723 #define SPDK_CONFIG_UBLK 1 00:10:08.723 #define SPDK_CONFIG_UBSAN 1 00:10:08.723 #undef SPDK_CONFIG_UNIT_TESTS 00:10:08.723 #undef SPDK_CONFIG_URING 00:10:08.723 #define SPDK_CONFIG_URING_PATH 00:10:08.723 #undef SPDK_CONFIG_URING_ZNS 00:10:08.723 #undef SPDK_CONFIG_USDT 00:10:08.723 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:08.723 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:08.723 #define SPDK_CONFIG_VFIO_USER 1 00:10:08.723 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:08.723 #define SPDK_CONFIG_VHOST 1 00:10:08.723 #define SPDK_CONFIG_VIRTIO 1 00:10:08.723 #undef SPDK_CONFIG_VTUNE 00:10:08.723 #define SPDK_CONFIG_VTUNE_DIR 00:10:08.723 #define SPDK_CONFIG_WERROR 1 00:10:08.723 #define SPDK_CONFIG_WPDK_DIR 00:10:08.723 #undef SPDK_CONFIG_XNVME 00:10:08.723 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:08.723 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:08.723 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:08.724 
09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:08.724 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:08.724 
09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:08.724 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.724 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:08.725 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2555398 ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2555398 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZquqPr 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZquqPr/tests/target /tmp/spdk.ZquqPr 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189122650112 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6841323520 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23048192 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981349888 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:10:08.726 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=638976 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:08.726 * Looking for test storage... 
00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189122650112 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9055916032 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.726 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:08.726 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:08.726 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.727 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.987 --rc genhtml_branch_coverage=1 00:10:08.987 --rc genhtml_function_coverage=1 00:10:08.987 --rc genhtml_legend=1 00:10:08.987 --rc geninfo_all_blocks=1 00:10:08.987 --rc geninfo_unexecuted_blocks=1 00:10:08.987 00:10:08.987 ' 00:10:08.987 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.987 --rc genhtml_branch_coverage=1 00:10:08.987 --rc genhtml_function_coverage=1 00:10:08.987 --rc genhtml_legend=1 00:10:08.987 --rc geninfo_all_blocks=1 00:10:08.988 --rc geninfo_unexecuted_blocks=1 00:10:08.988 00:10:08.988 ' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.988 --rc genhtml_branch_coverage=1 00:10:08.988 --rc genhtml_function_coverage=1 00:10:08.988 --rc genhtml_legend=1 00:10:08.988 --rc geninfo_all_blocks=1 00:10:08.988 --rc geninfo_unexecuted_blocks=1 00:10:08.988 00:10:08.988 ' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.988 --rc genhtml_branch_coverage=1 00:10:08.988 --rc genhtml_function_coverage=1 00:10:08.988 --rc genhtml_legend=1 00:10:08.988 --rc geninfo_all_blocks=1 00:10:08.988 --rc geninfo_unexecuted_blocks=1 00:10:08.988 00:10:08.988 ' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.988 09:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
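The trace above records a real bash error from nvmf/common.sh line 33: the test `'[' '' -eq 1 ']'` fails with "integer expression expected" because an empty string reaches an arithmetic comparison. A minimal standalone sketch of that failure mode and a common guard (the variable name `flag` is a hypothetical stand-in, not the script's actual variable):

```shell
#!/usr/bin/env bash
# Reproduce the failure mode seen in the trace: an empty string passed to
# an arithmetic test makes [ print "integer expression expected" on stderr
# and return status 2, so the if falls through to the else branch.
flag=""   # hypothetical stand-in for the empty/unset flag in the trace

if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled-or-error"   # this branch runs, via the error status
fi

# A common guard: default the variable to 0 before the numeric test,
# so the comparison is always well-formed and no error is emitted.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

As the trace shows, the harness tolerates this: the errored test simply takes the false branch and the run continues.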
MALLOC_BDEV_SIZE=512 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.988 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.989 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.989 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.562 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.563 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:15.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:15.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.563 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:15.563 Found net devices under 0000:86:00.0: cvl_0_0 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:15.563 Found net devices under 0000:86:00.1: cvl_0_1 00:10:15.563 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:10:15.563 00:10:15.563 --- 10.0.0.2 ping statistics --- 00:10:15.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.563 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:10:15.563 00:10:15.563 --- 10.0.0.1 ping statistics --- 00:10:15.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.563 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:15.563 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:15.564 09:48:48 
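The nvmf_tcp_init sequence that completes above wires a two-port physical-NIC topology: the first port (cvl_0_0) is moved into the namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and reachability is verified in both directions with ping. A dry-run sketch of the same sequence, with interface and namespace names taken from the trace; `run` only echoes, since the real commands need root and the ice-driver NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring performed in the trace above.
# run() echoes each command instead of executing it: the real commands
# require root privileges and the physical cvl_0_* interfaces.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target-side network namespace (from the trace)
TGT_IF=cvl_0_0       # target interface, assigned 10.0.0.2 inside $NS
INI_IF=cvl_0_1       # initiator interface, stays in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Note the prefix composed afterwards in the trace: NVMF_APP is re-built as `ip netns exec cvl_0_0_ns_spdk ...`, so nvmf_tgt itself runs inside the target namespace while the initiator-side nvme connect runs from the root namespace.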
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.564 ************************************ 00:10:15.564 START TEST nvmf_filesystem_no_in_capsule 00:10:15.564 ************************************ 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2558659 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2558659 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2558659 ']' 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.564 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.564 [2024-11-20 09:48:48.494425] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:10:15.564 [2024-11-20 09:48:48.494465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.564 [2024-11-20 09:48:48.571219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.564 [2024-11-20 09:48:48.612847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.564 [2024-11-20 09:48:48.612884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:15.564 [2024-11-20 09:48:48.612891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.564 [2024-11-20 09:48:48.612898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.564 [2024-11-20 09:48:48.612903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.564 [2024-11-20 09:48:48.614519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.564 [2024-11-20 09:48:48.614626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.564 [2024-11-20 09:48:48.614654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.564 [2024-11-20 09:48:48.614656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.822 [2024-11-20 09:48:49.364712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.822 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.079 Malloc1 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.079 [2024-11-20 09:48:49.511416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:16.079 09:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:16.079 { 00:10:16.079 "name": "Malloc1", 00:10:16.079 "aliases": [ 00:10:16.079 "c668cd4e-8769-46ab-b485-ca2ac9f8c974" 00:10:16.079 ], 00:10:16.079 "product_name": "Malloc disk", 00:10:16.079 "block_size": 512, 00:10:16.079 "num_blocks": 1048576, 00:10:16.079 "uuid": "c668cd4e-8769-46ab-b485-ca2ac9f8c974", 00:10:16.079 "assigned_rate_limits": { 00:10:16.079 "rw_ios_per_sec": 0, 00:10:16.079 "rw_mbytes_per_sec": 0, 00:10:16.079 "r_mbytes_per_sec": 0, 00:10:16.079 "w_mbytes_per_sec": 0 00:10:16.079 }, 00:10:16.079 "claimed": true, 00:10:16.079 "claim_type": "exclusive_write", 00:10:16.079 "zoned": false, 00:10:16.079 "supported_io_types": { 00:10:16.079 "read": true, 00:10:16.079 "write": true, 00:10:16.079 "unmap": true, 00:10:16.079 "flush": true, 00:10:16.079 "reset": true, 00:10:16.079 "nvme_admin": false, 00:10:16.079 "nvme_io": false, 00:10:16.079 "nvme_io_md": false, 00:10:16.079 "write_zeroes": true, 00:10:16.079 "zcopy": true, 00:10:16.079 "get_zone_info": false, 00:10:16.079 "zone_management": false, 00:10:16.079 "zone_append": false, 00:10:16.079 "compare": false, 00:10:16.079 "compare_and_write": 
false, 00:10:16.079 "abort": true, 00:10:16.079 "seek_hole": false, 00:10:16.079 "seek_data": false, 00:10:16.079 "copy": true, 00:10:16.079 "nvme_iov_md": false 00:10:16.079 }, 00:10:16.079 "memory_domains": [ 00:10:16.079 { 00:10:16.079 "dma_device_id": "system", 00:10:16.079 "dma_device_type": 1 00:10:16.079 }, 00:10:16.079 { 00:10:16.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.079 "dma_device_type": 2 00:10:16.079 } 00:10:16.079 ], 00:10:16.079 "driver_specific": {} 00:10:16.079 } 00:10:16.079 ]' 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:16.079 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:16.080 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:16.080 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:16.080 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:16.080 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:16.080 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.451 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:17.451 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:17.451 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.451 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:17.451 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.349 09:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:19.349 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.350 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.350 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.350 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.607 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:19.607 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:20.979 09:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.979 ************************************ 00:10:20.979 START TEST filesystem_ext4 00:10:20.979 ************************************ 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:20.979 09:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:20.979 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:20.979 mke2fs 1.47.0 (5-Feb-2023) 00:10:20.979 Discarding device blocks: 0/522240 done 00:10:20.979 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:20.979 Filesystem UUID: 70e1e8e7-ef45-4174-b451-d29615ff3ae2 00:10:20.979 Superblock backups stored on blocks: 00:10:20.979 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:20.979 00:10:20.979 Allocating group tables: 0/64 done 00:10:20.979 Writing inode tables: 0/64 done 00:10:20.979 Creating journal (8192 blocks): done 00:10:23.173 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:23.173 00:10:23.173 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:23.173 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:29.725 09:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2558659 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:29.725 00:10:29.725 real 0m8.420s 00:10:29.725 user 0m0.035s 00:10:29.725 sys 0m0.065s 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:29.725 ************************************ 00:10:29.725 END TEST filesystem_ext4 00:10:29.725 ************************************ 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:29.725 
09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.725 ************************************ 00:10:29.725 START TEST filesystem_btrfs 00:10:29.725 ************************************ 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.725 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:29.726 09:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:29.726 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:29.726 btrfs-progs v6.8.1 00:10:29.726 See https://btrfs.readthedocs.io for more information. 00:10:29.726 00:10:29.726 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:29.726 NOTE: several default settings have changed in version 5.15, please make sure 00:10:29.726 this does not affect your deployments: 00:10:29.726 - DUP for metadata (-m dup) 00:10:29.726 - enabled no-holes (-O no-holes) 00:10:29.726 - enabled free-space-tree (-R free-space-tree) 00:10:29.726 00:10:29.726 Label: (null) 00:10:29.726 UUID: 99aaf34b-b16e-4dd6-9fce-1a331e4d1473 00:10:29.726 Node size: 16384 00:10:29.726 Sector size: 4096 (CPU page size: 4096) 00:10:29.726 Filesystem size: 510.00MiB 00:10:29.726 Block group profiles: 00:10:29.726 Data: single 8.00MiB 00:10:29.726 Metadata: DUP 32.00MiB 00:10:29.726 System: DUP 8.00MiB 00:10:29.726 SSD detected: yes 00:10:29.726 Zoned device: no 00:10:29.726 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:29.726 Checksum: crc32c 00:10:29.726 Number of devices: 1 00:10:29.726 Devices: 00:10:29.726 ID SIZE PATH 00:10:29.726 1 510.00MiB /dev/nvme0n1p1 00:10:29.726 00:10:29.726 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:29.726 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.658 09:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2558659 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.658 00:10:30.658 real 0m1.321s 00:10:30.658 user 0m0.019s 00:10:30.658 sys 0m0.120s 00:10:30.658 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.658 
09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.658 ************************************ 00:10:30.658 END TEST filesystem_btrfs 00:10:30.658 ************************************ 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.658 ************************************ 00:10:30.658 START TEST filesystem_xfs 00:10:30.658 ************************************ 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:30.658 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:31.224 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:31.224 = sectsz=512 attr=2, projid32bit=1 00:10:31.224 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:31.224 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:31.224 data = bsize=4096 blocks=130560, imaxpct=25 00:10:31.224 = sunit=0 swidth=0 blks 00:10:31.224 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:31.224 log =internal log bsize=4096 blocks=16384, version=2 00:10:31.224 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:31.224 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:32.155 Discarding blocks...Done. 
00:10:32.155 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:32.155 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.684 09:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.684 00:10:34.684 real 0m3.659s 00:10:34.684 user 0m0.026s 00:10:34.684 sys 0m0.071s 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 ************************************ 00:10:34.684 END TEST filesystem_xfs 00:10:34.684 ************************************ 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2558659 ']' 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558659' 00:10:34.684 killing process with pid 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2558659 00:10:34.684 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2558659 00:10:34.684 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:34.684 00:10:34.684 real 0m19.829s 00:10:34.684 user 1m18.254s 00:10:34.684 sys 0m1.444s 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.944 ************************************ 00:10:34.944 END TEST nvmf_filesystem_no_in_capsule 00:10:34.944 ************************************ 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.944 09:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.944 ************************************ 00:10:34.944 START TEST nvmf_filesystem_in_capsule 00:10:34.944 ************************************ 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2562114 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2562114 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2562114 ']' 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.944 09:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.944 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.944 [2024-11-20 09:49:08.389841] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:10:34.944 [2024-11-20 09:49:08.389888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.944 [2024-11-20 09:49:08.471393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.944 [2024-11-20 09:49:08.512980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.944 [2024-11-20 09:49:08.513014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.944 [2024-11-20 09:49:08.513022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.944 [2024-11-20 09:49:08.513028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.944 [2024-11-20 09:49:08.513033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:34.944 [2024-11-20 09:49:08.514439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.944 [2024-11-20 09:49:08.514482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.944 [2024-11-20 09:49:08.514587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.944 [2024-11-20 09:49:08.514588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 [2024-11-20 09:49:09.277122] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 09:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 [2024-11-20 09:49:09.423952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.878 09:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:35.878 { 00:10:35.878 "name": "Malloc1", 00:10:35.878 "aliases": [ 00:10:35.878 "38ecdf50-380e-4e6a-ac8e-e3324a30a2b4" 00:10:35.878 ], 00:10:35.878 "product_name": "Malloc disk", 00:10:35.878 "block_size": 512, 00:10:35.878 "num_blocks": 1048576, 00:10:35.878 "uuid": "38ecdf50-380e-4e6a-ac8e-e3324a30a2b4", 00:10:35.878 "assigned_rate_limits": { 00:10:35.878 "rw_ios_per_sec": 0, 00:10:35.878 "rw_mbytes_per_sec": 0, 00:10:35.878 "r_mbytes_per_sec": 0, 00:10:35.878 "w_mbytes_per_sec": 0 00:10:35.878 }, 00:10:35.878 "claimed": true, 00:10:35.878 "claim_type": "exclusive_write", 00:10:35.878 "zoned": false, 00:10:35.878 "supported_io_types": { 00:10:35.878 "read": true, 00:10:35.878 "write": true, 00:10:35.878 "unmap": true, 00:10:35.878 "flush": true, 00:10:35.878 "reset": true, 00:10:35.878 "nvme_admin": false, 00:10:35.878 "nvme_io": false, 00:10:35.878 "nvme_io_md": false, 00:10:35.878 "write_zeroes": true, 00:10:35.878 "zcopy": true, 00:10:35.878 "get_zone_info": false, 00:10:35.878 "zone_management": false, 00:10:35.878 "zone_append": false, 00:10:35.878 "compare": false, 00:10:35.878 "compare_and_write": false, 00:10:35.878 "abort": true, 00:10:35.878 "seek_hole": false, 00:10:35.878 "seek_data": false, 00:10:35.878 "copy": true, 00:10:35.878 "nvme_iov_md": false 00:10:35.878 }, 00:10:35.878 "memory_domains": [ 00:10:35.878 { 00:10:35.878 "dma_device_id": "system", 00:10:35.878 "dma_device_type": 1 00:10:35.878 }, 00:10:35.878 { 00:10:35.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.878 "dma_device_type": 2 00:10:35.878 } 00:10:35.878 ], 00:10:35.878 
"driver_specific": {} 00:10:35.878 } 00:10:35.878 ]' 00:10:35.878 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:36.136 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.510 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.510 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:37.510 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.510 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:37.510 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:39.408 09:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:39.408 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:39.974 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.905 ************************************ 00:10:40.905 START TEST filesystem_in_capsule_ext4 00:10:40.905 ************************************ 00:10:40.905 09:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:40.905 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:40.905 mke2fs 1.47.0 (5-Feb-2023) 00:10:41.161 Discarding device blocks: 
0/522240 done 00:10:41.161 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:41.161 Filesystem UUID: 3e520858-5ec5-4a00-8467-36bdb38c398e 00:10:41.161 Superblock backups stored on blocks: 00:10:41.161 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:41.161 00:10:41.161 Allocating group tables: 0/64 done 00:10:41.161 Writing inode tables: 0/64 done 00:10:42.528 Creating journal (8192 blocks): done 00:10:42.528 Writing superblocks and filesystem accounting information: 0/64 done 00:10:42.528 00:10:42.528 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:42.528 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2562114 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.183 00:10:49.183 real 0m7.514s 00:10:49.183 user 0m0.033s 00:10:49.183 sys 0m0.064s 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.183 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:49.183 ************************************ 00:10:49.183 END TEST filesystem_in_capsule_ext4 00:10:49.183 ************************************ 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.183 ************************************ 00:10:49.183 START 
TEST filesystem_in_capsule_btrfs 00:10:49.183 ************************************ 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:49.183 btrfs-progs v6.8.1 00:10:49.183 See https://btrfs.readthedocs.io for more information. 00:10:49.183 00:10:49.183 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:49.183 NOTE: several default settings have changed in version 5.15, please make sure 00:10:49.183 this does not affect your deployments: 00:10:49.183 - DUP for metadata (-m dup) 00:10:49.183 - enabled no-holes (-O no-holes) 00:10:49.183 - enabled free-space-tree (-R free-space-tree) 00:10:49.183 00:10:49.183 Label: (null) 00:10:49.183 UUID: 97151897-2892-4f0e-83dd-662bc8b65955 00:10:49.183 Node size: 16384 00:10:49.183 Sector size: 4096 (CPU page size: 4096) 00:10:49.183 Filesystem size: 510.00MiB 00:10:49.183 Block group profiles: 00:10:49.183 Data: single 8.00MiB 00:10:49.183 Metadata: DUP 32.00MiB 00:10:49.183 System: DUP 8.00MiB 00:10:49.183 SSD detected: yes 00:10:49.183 Zoned device: no 00:10:49.183 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:49.183 Checksum: crc32c 00:10:49.183 Number of devices: 1 00:10:49.183 Devices: 00:10:49.183 ID SIZE PATH 00:10:49.183 1 510.00MiB /dev/nvme0n1p1 00:10:49.183 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:49.183 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:49.441 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2562114 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.698 00:10:49.698 real 0m1.002s 00:10:49.698 user 0m0.027s 00:10:49.698 sys 0m0.117s 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:49.698 ************************************ 00:10:49.698 END TEST filesystem_in_capsule_btrfs 00:10:49.698 ************************************ 00:10:49.698 09:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.698 ************************************ 00:10:49.698 START TEST filesystem_in_capsule_xfs 00:10:49.698 ************************************ 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:49.698 
09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:49.698 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:49.698 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:49.698 = sectsz=512 attr=2, projid32bit=1 00:10:49.698 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:49.698 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:49.698 data = bsize=4096 blocks=130560, imaxpct=25 00:10:49.698 = sunit=0 swidth=0 blks 00:10:49.698 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:49.698 log =internal log bsize=4096 blocks=16384, version=2 00:10:49.698 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:49.698 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:50.629 Discarding blocks...Done. 
00:10:50.629 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:50.629 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2562114 00:10:53.153 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.154 00:10:53.154 real 0m3.420s 00:10:53.154 user 0m0.024s 00:10:53.154 sys 0m0.075s 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.154 ************************************ 00:10:53.154 END TEST filesystem_in_capsule_xfs 00:10:53.154 ************************************ 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.154 09:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:53.154 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2562114 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2562114 ']' 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2562114 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.412 09:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562114 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562114' 00:10:53.412 killing process with pid 2562114 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2562114 00:10:53.412 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2562114 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:53.671 00:10:53.671 real 0m18.791s 00:10:53.671 user 1m14.130s 00:10:53.671 sys 0m1.476s 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.671 ************************************ 00:10:53.671 END TEST nvmf_filesystem_in_capsule 00:10:53.671 ************************************ 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:53.671 rmmod nvme_tcp 00:10:53.671 rmmod nvme_fabrics 00:10:53.671 rmmod nvme_keyring 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.671 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.208 00:10:56.208 real 0m47.416s 00:10:56.208 user 2m34.494s 00:10:56.208 sys 0m7.635s 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.208 ************************************ 00:10:56.208 END TEST nvmf_filesystem 00:10:56.208 ************************************ 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.208 ************************************ 00:10:56.208 START TEST nvmf_target_discovery 00:10:56.208 ************************************ 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:56.208 * Looking for test storage... 
00:10:56.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:56.208 
09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:56.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.208 --rc genhtml_branch_coverage=1 00:10:56.208 --rc genhtml_function_coverage=1 00:10:56.208 --rc genhtml_legend=1 00:10:56.208 --rc geninfo_all_blocks=1 00:10:56.208 --rc geninfo_unexecuted_blocks=1 00:10:56.208 00:10:56.208 ' 00:10:56.208 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:56.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.208 --rc genhtml_branch_coverage=1 00:10:56.208 --rc genhtml_function_coverage=1 00:10:56.208 --rc genhtml_legend=1 00:10:56.208 --rc geninfo_all_blocks=1 00:10:56.208 --rc geninfo_unexecuted_blocks=1 00:10:56.208 00:10:56.208 ' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.209 --rc genhtml_branch_coverage=1 00:10:56.209 --rc genhtml_function_coverage=1 00:10:56.209 --rc genhtml_legend=1 00:10:56.209 --rc geninfo_all_blocks=1 00:10:56.209 --rc geninfo_unexecuted_blocks=1 00:10:56.209 00:10:56.209 ' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.209 --rc genhtml_branch_coverage=1 00:10:56.209 --rc genhtml_function_coverage=1 00:10:56.209 --rc genhtml_legend=1 00:10:56.209 --rc geninfo_all_blocks=1 00:10:56.209 --rc geninfo_unexecuted_blocks=1 00:10:56.209 00:10:56.209 ' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.209 09:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.209 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.782 09:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.782 09:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.782 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.783 09:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.783 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.783 09:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.783 Found net devices under 0000:86:00.1: cvl_0_1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:02.783 00:11:02.783 --- 10.0.0.2 ping statistics --- 00:11:02.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.783 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:02.783 00:11:02.783 --- 10.0.0.1 ping statistics --- 00:11:02.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.783 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2568869 00:11:02.783 09:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2568869 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2568869 ']' 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.783 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.783 [2024-11-20 09:49:35.677843] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:11:02.783 [2024-11-20 09:49:35.677897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.783 [2024-11-20 09:49:35.758074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.783 [2024-11-20 09:49:35.798601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
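The `nvmf_tcp_init` plumbing traced above (nvmf/common.sh@250–291) can be summarized as the command sequence below. This is an illustrative dry-run sketch reconstructed from the log records, not the harness script itself: each command is echoed rather than executed, since the real steps need root and the `cvl_0_0`/`cvl_0_1` interfaces enumerated earlier under 0000:86:00.1. Interface names and the 10.0.0.0/24 addresses are taken directly from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the log above.
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target-side network namespace
TGT_IF=cvl_0_0       # target interface (moved into the netns)
INI_IF=cvl_0_1       # initiator interface (stays in the root ns)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"            # isolate the target NIC
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                          # initiator -> target check
run ip netns exec "$NS" ping -c 1 "$INI_IP"      # target -> initiator check
```

The point of the namespace move is that the target NIC and the initiator NIC are on the same physical card, so without isolation the kernel would short-circuit the traffic instead of exercising the wire; `nvmf_tgt` is then launched under `ip netns exec cvl_0_0_ns_spdk`, as the log shows.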
00:11:02.783 [2024-11-20 09:49:35.798635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.783 [2024-11-20 09:49:35.798642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.783 [2024-11-20 09:49:35.798648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.783 [2024-11-20 09:49:35.798652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.783 [2024-11-20 09:49:35.800110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.783 [2024-11-20 09:49:35.800242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.783 [2024-11-20 09:49:35.800304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.783 [2024-11-20 09:49:35.800310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 [2024-11-20 09:49:36.558969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 Null1 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.042 
09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.042 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.043 [2024-11-20 09:49:36.608291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.043 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.043 Null2 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 
09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 Null3 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 Null4 00:11:03.302 
09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:03.561 00:11:03.561 Discovery Log Number of Records 6, Generation counter 6 00:11:03.561 =====Discovery Log Entry 0====== 00:11:03.561 trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: current discovery subsystem 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4420 00:11:03.561 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: explicit discovery connections, duplicate discovery information 00:11:03.561 sectype: none 00:11:03.561 =====Discovery Log Entry 1====== 00:11:03.561 trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: nvme subsystem 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4420 00:11:03.561 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: none 00:11:03.561 sectype: none 00:11:03.561 =====Discovery Log Entry 2====== 00:11:03.561 
trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: nvme subsystem 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4420 00:11:03.561 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: none 00:11:03.561 sectype: none 00:11:03.561 =====Discovery Log Entry 3====== 00:11:03.561 trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: nvme subsystem 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4420 00:11:03.561 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: none 00:11:03.561 sectype: none 00:11:03.561 =====Discovery Log Entry 4====== 00:11:03.561 trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: nvme subsystem 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4420 00:11:03.561 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: none 00:11:03.561 sectype: none 00:11:03.561 =====Discovery Log Entry 5====== 00:11:03.561 trtype: tcp 00:11:03.561 adrfam: ipv4 00:11:03.561 subtype: discovery subsystem referral 00:11:03.561 treq: not required 00:11:03.561 portid: 0 00:11:03.561 trsvcid: 4430 00:11:03.561 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:03.561 traddr: 10.0.0.2 00:11:03.561 eflags: none 00:11:03.561 sectype: none 00:11:03.561 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:03.561 Perform nvmf subsystem discovery via RPC 00:11:03.561 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:03.561 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.561 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.561 [ 00:11:03.561 { 00:11:03.561 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:03.561 "subtype": "Discovery", 00:11:03.561 "listen_addresses": [ 00:11:03.561 { 00:11:03.561 "trtype": "TCP", 00:11:03.561 "adrfam": "IPv4", 00:11:03.561 "traddr": "10.0.0.2", 00:11:03.561 "trsvcid": "4420" 00:11:03.561 } 00:11:03.561 ], 00:11:03.561 "allow_any_host": true, 00:11:03.561 "hosts": [] 00:11:03.561 }, 00:11:03.561 { 00:11:03.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:03.561 "subtype": "NVMe", 00:11:03.561 "listen_addresses": [ 00:11:03.561 { 00:11:03.561 "trtype": "TCP", 00:11:03.561 "adrfam": "IPv4", 00:11:03.561 "traddr": "10.0.0.2", 00:11:03.561 "trsvcid": "4420" 00:11:03.561 } 00:11:03.561 ], 00:11:03.561 "allow_any_host": true, 00:11:03.561 "hosts": [], 00:11:03.561 "serial_number": "SPDK00000000000001", 00:11:03.561 "model_number": "SPDK bdev Controller", 00:11:03.561 "max_namespaces": 32, 00:11:03.561 "min_cntlid": 1, 00:11:03.561 "max_cntlid": 65519, 00:11:03.561 "namespaces": [ 00:11:03.561 { 00:11:03.561 "nsid": 1, 00:11:03.561 "bdev_name": "Null1", 00:11:03.561 "name": "Null1", 00:11:03.561 "nguid": "E1CDDAA2E8604557A0FA9C5A3E206717", 00:11:03.561 "uuid": "e1cddaa2-e860-4557-a0fa-9c5a3e206717" 00:11:03.562 } 00:11:03.562 ] 00:11:03.562 }, 00:11:03.562 { 00:11:03.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:03.562 "subtype": "NVMe", 00:11:03.562 "listen_addresses": [ 00:11:03.562 { 00:11:03.562 "trtype": "TCP", 00:11:03.562 "adrfam": "IPv4", 00:11:03.562 "traddr": "10.0.0.2", 00:11:03.562 "trsvcid": "4420" 00:11:03.562 } 00:11:03.562 ], 00:11:03.562 "allow_any_host": true, 00:11:03.562 "hosts": [], 00:11:03.562 "serial_number": "SPDK00000000000002", 00:11:03.562 "model_number": "SPDK bdev Controller", 00:11:03.562 "max_namespaces": 32, 00:11:03.562 "min_cntlid": 1, 00:11:03.562 "max_cntlid": 65519, 00:11:03.562 "namespaces": [ 00:11:03.562 { 00:11:03.562 "nsid": 1, 00:11:03.562 "bdev_name": "Null2", 00:11:03.562 "name": "Null2", 00:11:03.562 "nguid": "B5F6F3C949294A978E11125EEA6C2019", 
00:11:03.562 "uuid": "b5f6f3c9-4929-4a97-8e11-125eea6c2019" 00:11:03.562 } 00:11:03.562 ] 00:11:03.562 }, 00:11:03.562 { 00:11:03.562 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:03.562 "subtype": "NVMe", 00:11:03.562 "listen_addresses": [ 00:11:03.562 { 00:11:03.562 "trtype": "TCP", 00:11:03.562 "adrfam": "IPv4", 00:11:03.562 "traddr": "10.0.0.2", 00:11:03.562 "trsvcid": "4420" 00:11:03.562 } 00:11:03.562 ], 00:11:03.562 "allow_any_host": true, 00:11:03.562 "hosts": [], 00:11:03.562 "serial_number": "SPDK00000000000003", 00:11:03.562 "model_number": "SPDK bdev Controller", 00:11:03.562 "max_namespaces": 32, 00:11:03.562 "min_cntlid": 1, 00:11:03.562 "max_cntlid": 65519, 00:11:03.562 "namespaces": [ 00:11:03.562 { 00:11:03.562 "nsid": 1, 00:11:03.562 "bdev_name": "Null3", 00:11:03.562 "name": "Null3", 00:11:03.562 "nguid": "94BA581F4AAF4A31BC51156C1313E59A", 00:11:03.562 "uuid": "94ba581f-4aaf-4a31-bc51-156c1313e59a" 00:11:03.562 } 00:11:03.562 ] 00:11:03.562 }, 00:11:03.562 { 00:11:03.562 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:03.562 "subtype": "NVMe", 00:11:03.562 "listen_addresses": [ 00:11:03.562 { 00:11:03.562 "trtype": "TCP", 00:11:03.562 "adrfam": "IPv4", 00:11:03.562 "traddr": "10.0.0.2", 00:11:03.562 "trsvcid": "4420" 00:11:03.562 } 00:11:03.562 ], 00:11:03.562 "allow_any_host": true, 00:11:03.562 "hosts": [], 00:11:03.562 "serial_number": "SPDK00000000000004", 00:11:03.562 "model_number": "SPDK bdev Controller", 00:11:03.562 "max_namespaces": 32, 00:11:03.562 "min_cntlid": 1, 00:11:03.562 "max_cntlid": 65519, 00:11:03.562 "namespaces": [ 00:11:03.562 { 00:11:03.562 "nsid": 1, 00:11:03.562 "bdev_name": "Null4", 00:11:03.562 "name": "Null4", 00:11:03.562 "nguid": "D41BEDC55C6C4825AF23D2BC5B408913", 00:11:03.562 "uuid": "d41bedc5-5c6c-4825-af23-d2bc5b408913" 00:11:03.562 } 00:11:03.562 ] 00:11:03.562 } 00:11:03.562 ] 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 
09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.562 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.562 rmmod nvme_tcp 00:11:03.562 rmmod nvme_fabrics 00:11:03.562 rmmod nvme_keyring 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2568869 ']' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2568869 ']' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
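The trace above (discovery.sh lines 42-44, 47) walks a teardown loop: for each of the four test subsystems it deletes the NVMe-oF subsystem and its backing null bdev, then removes the discovery referral. A minimal dry-run sketch of that loop, assuming the `rpc.py` invocation and NQN/bdev naming seen in the trace (the `rpc` stub here is hypothetical, standing in for SPDK's `scripts/rpc.py` so the sketch runs without a live target):

```shell
#!/usr/bin/env bash
# Dry-run stub: echoes the rpc.py command instead of invoking a real SPDK target.
rpc() { echo "rpc.py $*"; }

# Mirror of the discovery.sh teardown loop traced above:
# each iteration removes one subsystem and its backing null bdev.
for i in $(seq 1 4); do
  rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  rpc bdev_null_delete "Null$i"
done

# Finally drop the discovery referral added during setup (port 4430 per the trace).
rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```

After this loop `bdev_get_bdevs | jq -r '.[].name'` comes back empty in the trace, which is what the `check_bdevs=` / `'[' -n '' ']'` lines verify.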
00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568869' 00:11:03.822 killing process with pid 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2568869 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.822 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.370 00:11:06.370 real 0m10.067s 00:11:06.370 user 0m8.285s 00:11:06.370 sys 0m4.952s 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.370 ************************************ 00:11:06.370 END TEST nvmf_target_discovery 00:11:06.370 ************************************ 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.370 ************************************ 00:11:06.370 START TEST nvmf_referrals 00:11:06.370 ************************************ 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:06.370 * Looking for test storage... 
00:11:06.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:06.370 09:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.370 
--rc genhtml_branch_coverage=1 00:11:06.370 --rc genhtml_function_coverage=1 00:11:06.370 --rc genhtml_legend=1 00:11:06.370 --rc geninfo_all_blocks=1 00:11:06.370 --rc geninfo_unexecuted_blocks=1 00:11:06.370 00:11:06.370 ' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.370 --rc genhtml_branch_coverage=1 00:11:06.370 --rc genhtml_function_coverage=1 00:11:06.370 --rc genhtml_legend=1 00:11:06.370 --rc geninfo_all_blocks=1 00:11:06.370 --rc geninfo_unexecuted_blocks=1 00:11:06.370 00:11:06.370 ' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.370 --rc genhtml_branch_coverage=1 00:11:06.370 --rc genhtml_function_coverage=1 00:11:06.370 --rc genhtml_legend=1 00:11:06.370 --rc geninfo_all_blocks=1 00:11:06.370 --rc geninfo_unexecuted_blocks=1 00:11:06.370 00:11:06.370 ' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.370 --rc genhtml_branch_coverage=1 00:11:06.370 --rc genhtml_function_coverage=1 00:11:06.370 --rc genhtml_legend=1 00:11:06.370 --rc geninfo_all_blocks=1 00:11:06.370 --rc geninfo_unexecuted_blocks=1 00:11:06.370 00:11:06.370 ' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.370 
09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.370 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.371 09:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.371 09:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.371 09:49:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:12.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:12.935 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:12.935 Found net devices under 0000:86:00.0: cvl_0_0 00:11:12.935 09:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.935 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:12.936 Found net devices under 0000:86:00.1: cvl_0_1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:12.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:11:12.936 00:11:12.936 --- 10.0.0.2 ping statistics --- 00:11:12.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.936 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:11:12.936 00:11:12.936 --- 10.0.0.1 ping statistics --- 00:11:12.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.936 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2572656 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2572656 00:11:12.936 
09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2572656 ']' 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.936 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.936 [2024-11-20 09:49:45.797426] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:11:12.936 [2024-11-20 09:49:45.797473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.936 [2024-11-20 09:49:45.878132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.936 [2024-11-20 09:49:45.920954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.936 [2024-11-20 09:49:45.920992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:12.936 [2024-11-20 09:49:45.920999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.936 [2024-11-20 09:49:45.921005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.936 [2024-11-20 09:49:45.921010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.936 [2024-11-20 09:49:45.922583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.936 [2024-11-20 09:49:45.922695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.936 [2024-11-20 09:49:45.922802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.936 [2024-11-20 09:49:45.922804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.194 [2024-11-20 09:49:46.677789] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.194 [2024-11-20 09:49:46.691062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.194 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:13.195 09:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:13.195 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.451 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.451 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:13.451 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:13.451 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:13.451 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.452 09:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.452 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:13.452 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:13.709 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:13.971 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:14.229 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:14.485 09:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:14.485 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.485 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.742 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.000 09:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.000 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.259 09:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.259 rmmod nvme_tcp 00:11:15.259 rmmod nvme_fabrics 00:11:15.259 rmmod nvme_keyring 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2572656 ']' 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2572656 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2572656 ']' 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2572656 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572656 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572656' 00:11:15.259 killing process with pid 2572656 00:11:15.259 09:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2572656 00:11:15.259 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2572656 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.519 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.531 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.531 00:11:17.531 real 0m11.475s 00:11:17.531 user 0m14.591s 00:11:17.531 sys 0m5.322s 00:11:17.531 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.531 09:49:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.531 ************************************ 00:11:17.531 END TEST nvmf_referrals 00:11:17.531 ************************************ 00:11:17.531 09:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:17.531 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.531 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.531 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.531 ************************************ 00:11:17.531 START TEST nvmf_connect_disconnect 00:11:17.531 ************************************ 00:11:17.531 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:17.791 * Looking for test storage... 
00:11:17.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.791 --rc genhtml_branch_coverage=1 00:11:17.791 --rc genhtml_function_coverage=1 00:11:17.791 --rc genhtml_legend=1 00:11:17.791 --rc geninfo_all_blocks=1 00:11:17.791 --rc geninfo_unexecuted_blocks=1 00:11:17.791 00:11:17.791 ' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.791 --rc genhtml_branch_coverage=1 00:11:17.791 --rc genhtml_function_coverage=1 00:11:17.791 --rc genhtml_legend=1 00:11:17.791 --rc geninfo_all_blocks=1 00:11:17.791 --rc geninfo_unexecuted_blocks=1 00:11:17.791 00:11:17.791 ' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.791 --rc genhtml_branch_coverage=1 00:11:17.791 --rc genhtml_function_coverage=1 00:11:17.791 --rc genhtml_legend=1 00:11:17.791 --rc geninfo_all_blocks=1 00:11:17.791 --rc geninfo_unexecuted_blocks=1 00:11:17.791 00:11:17.791 ' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.791 --rc genhtml_branch_coverage=1 00:11:17.791 --rc genhtml_function_coverage=1 00:11:17.791 --rc genhtml_legend=1 00:11:17.791 --rc geninfo_all_blocks=1 00:11:17.791 --rc geninfo_unexecuted_blocks=1 00:11:17.791 00:11:17.791 ' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.791 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.792 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.363 09:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.363 09:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.363 09:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.363 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.363 09:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.363 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.363 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.364 09:49:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.364 09:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:24.364 00:11:24.364 --- 10.0.0.2 ping statistics --- 00:11:24.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.364 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:24.364 00:11:24.364 --- 10.0.0.1 ping statistics --- 00:11:24.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.364 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2576843 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2576843 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2576843 ']' 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 [2024-11-20 09:49:57.278520] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:11:24.364 [2024-11-20 09:49:57.278569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.364 [2024-11-20 09:49:57.356617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.364 [2024-11-20 09:49:57.397424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:24.364 [2024-11-20 09:49:57.397462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.364 [2024-11-20 09:49:57.397468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.364 [2024-11-20 09:49:57.397474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.364 [2024-11-20 09:49:57.397478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.364 [2024-11-20 09:49:57.398916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.364 [2024-11-20 09:49:57.399027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.364 [2024-11-20 09:49:57.399132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.364 [2024-11-20 09:49:57.399134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:24.364 09:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 [2024-11-20 09:49:57.542671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.364 09:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:24.365 [2024-11-20 09:49:57.618486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:24.365 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:27.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:40.767 09:50:13 
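The trace above shows connect_disconnect.sh driving the target through SPDK's JSON-RPC interface: create the TCP transport, a RAM-backed bdev, a subsystem, attach the namespace, then listen on 10.0.0.2:4420 before the connect/disconnect iterations. The sequence can be sketched as a standalone dry-run script (a sketch only; the rpc.py path is hypothetical, and against a live nvmf_tgt you would execute the commands instead of echoing them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence visible in the log above.
# rpc_py path is a placeholder; the log invokes SPDK's rpc_cmd wrapper.
rpc_py="scripts/rpc.py"

CMDS=(
  "nvmf_create_transport -t tcp -o -u 8192 -c 0"   # TCP transport; -u io-unit-size, -c in-capsule-data-size
  "bdev_malloc_create 64 512"                      # 64 MiB RAM bdev, 512 B blocks -> Malloc0
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)

for c in "${CMDS[@]}"; do
  # Dry run: print what would be sent; replace echo with direct
  # invocation of "$rpc_py" when a target is actually running.
  echo "$rpc_py $c"
done
```

After this setup, each of the five iterations in the log simply connects an initiator to nqn.2016-06.io.spdk:cnode1 and disconnects it again ("disconnected 1 controller(s)").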
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.767 rmmod nvme_tcp 00:11:40.767 rmmod nvme_fabrics 00:11:40.767 rmmod nvme_keyring 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2576843 ']' 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2576843 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2576843 ']' 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2576843 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.767 09:50:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2576843 
00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2576843' 00:11:40.767 killing process with pid 2576843 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2576843 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2576843 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:40.767 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.768 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.768 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.768 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.768 09:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.768 09:50:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.305 00:11:43.305 real 0m25.227s 00:11:43.305 user 1m8.360s 00:11:43.305 sys 0m5.888s 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.305 ************************************ 00:11:43.305 END TEST nvmf_connect_disconnect 00:11:43.305 ************************************ 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.305 ************************************ 00:11:43.305 START TEST nvmf_multitarget 00:11:43.305 ************************************ 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:43.305 * Looking for test storage... 
00:11:43.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.305 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:43.306 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.306 --rc genhtml_branch_coverage=1 00:11:43.306 --rc genhtml_function_coverage=1 00:11:43.306 --rc genhtml_legend=1 00:11:43.306 --rc geninfo_all_blocks=1 00:11:43.306 --rc geninfo_unexecuted_blocks=1 00:11:43.306 00:11:43.306 ' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.306 --rc genhtml_branch_coverage=1 00:11:43.306 --rc genhtml_function_coverage=1 00:11:43.306 --rc genhtml_legend=1 00:11:43.306 --rc geninfo_all_blocks=1 00:11:43.306 --rc geninfo_unexecuted_blocks=1 00:11:43.306 00:11:43.306 ' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.306 --rc genhtml_branch_coverage=1 00:11:43.306 --rc genhtml_function_coverage=1 00:11:43.306 --rc genhtml_legend=1 00:11:43.306 --rc geninfo_all_blocks=1 00:11:43.306 --rc geninfo_unexecuted_blocks=1 00:11:43.306 00:11:43.306 ' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:43.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.306 --rc genhtml_branch_coverage=1 00:11:43.306 --rc genhtml_function_coverage=1 00:11:43.306 --rc genhtml_legend=1 00:11:43.306 --rc geninfo_all_blocks=1 00:11:43.306 --rc geninfo_unexecuted_blocks=1 00:11:43.306 00:11:43.306 ' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.306 09:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.306 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.307 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.307 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.307 09:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.307 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.307 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.307 09:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:49.890 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.890 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:49.890 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:49.890 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.890 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:49.890 Found net devices under 0000:86:00.0: cvl_0_0 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.890 
09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:49.890 Found net devices under 0000:86:00.1: cvl_0_1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.890 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.890 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:11:49.891 00:11:49.891 --- 10.0.0.2 ping statistics --- 00:11:49.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.891 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:11:49.891 00:11:49.891 --- 10.0.0.1 ping statistics --- 00:11:49.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.891 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2583217 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2583217 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2583217 ']' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 [2024-11-20 09:50:22.593914] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:11:49.891 [2024-11-20 09:50:22.593957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.891 [2024-11-20 09:50:22.673368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.891 [2024-11-20 09:50:22.715581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.891 [2024-11-20 09:50:22.715618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:49.891 [2024-11-20 09:50:22.715625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.891 [2024-11-20 09:50:22.715631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.891 [2024-11-20 09:50:22.715636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.891 [2024-11-20 09:50:22.717224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.891 [2024-11-20 09:50:22.717337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.891 [2024-11-20 09:50:22.717338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.891 [2024-11-20 09:50:22.717294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.891 09:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:49.891 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:49.891 "nvmf_tgt_1" 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:49.891 "nvmf_tgt_2" 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:49.891 true 00:11:49.891 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:50.150 true 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.150 rmmod nvme_tcp 00:11:50.150 rmmod nvme_fabrics 00:11:50.150 rmmod nvme_keyring 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2583217 ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2583217 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2583217 ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2583217 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2583217 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2583217' 00:11:50.150 killing process with pid 2583217 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2583217 00:11:50.150 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2583217 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.410 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.959 00:11:52.959 real 0m9.602s 00:11:52.959 user 0m7.146s 00:11:52.959 sys 0m4.921s 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:52.959 ************************************ 00:11:52.959 END TEST nvmf_multitarget 00:11:52.959 ************************************ 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.959 09:50:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.959 ************************************ 00:11:52.959 START TEST nvmf_rpc 00:11:52.959 ************************************ 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:52.959 * Looking for test storage... 
00:11:52.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.959 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.959 --rc genhtml_branch_coverage=1 00:11:52.959 --rc genhtml_function_coverage=1 00:11:52.959 --rc genhtml_legend=1 00:11:52.959 --rc geninfo_all_blocks=1 00:11:52.959 --rc geninfo_unexecuted_blocks=1 
00:11:52.959 00:11:52.959 ' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.959 --rc genhtml_branch_coverage=1 00:11:52.959 --rc genhtml_function_coverage=1 00:11:52.959 --rc genhtml_legend=1 00:11:52.959 --rc geninfo_all_blocks=1 00:11:52.959 --rc geninfo_unexecuted_blocks=1 00:11:52.959 00:11:52.959 ' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.959 --rc genhtml_branch_coverage=1 00:11:52.959 --rc genhtml_function_coverage=1 00:11:52.959 --rc genhtml_legend=1 00:11:52.959 --rc geninfo_all_blocks=1 00:11:52.959 --rc geninfo_unexecuted_blocks=1 00:11:52.959 00:11:52.959 ' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.959 --rc genhtml_branch_coverage=1 00:11:52.959 --rc genhtml_function_coverage=1 00:11:52.959 --rc genhtml_legend=1 00:11:52.959 --rc geninfo_all_blocks=1 00:11:52.959 --rc geninfo_unexecuted_blocks=1 00:11:52.959 00:11:52.959 ' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.959 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.959 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.960 09:50:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.960 09:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.532 
09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:59.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:59.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:59.532 Found net devices under 0000:86:00.0: cvl_0_0 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:59.532 Found net devices under 0000:86:00.1: cvl_0_1 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.532 09:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.532 
09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.532 09:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.532 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.532 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.532 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.532 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:11:59.533 00:11:59.533 --- 10.0.0.2 ping statistics --- 00:11:59.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.533 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:11:59.533 00:11:59.533 --- 10.0.0.1 ping statistics --- 00:11:59.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.533 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2586926 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.533 
09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2586926 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2586926 ']' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 [2024-11-20 09:50:32.287128] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:11:59.533 [2024-11-20 09:50:32.287171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.533 [2024-11-20 09:50:32.365132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.533 [2024-11-20 09:50:32.407306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.533 [2024-11-20 09:50:32.407342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.533 [2024-11-20 09:50:32.407350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.533 [2024-11-20 09:50:32.407356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:59.533 [2024-11-20 09:50:32.407362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.533 [2024-11-20 09:50:32.408794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.533 [2024-11-20 09:50:32.408905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.533 [2024-11-20 09:50:32.408940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.533 [2024-11-20 09:50:32.408940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:59.533 "tick_rate": 2100000000, 00:11:59.533 "poll_groups": [ 00:11:59.533 { 00:11:59.533 "name": "nvmf_tgt_poll_group_000", 00:11:59.533 "admin_qpairs": 0, 00:11:59.533 "io_qpairs": 0, 00:11:59.533 
"current_admin_qpairs": 0, 00:11:59.533 "current_io_qpairs": 0, 00:11:59.533 "pending_bdev_io": 0, 00:11:59.533 "completed_nvme_io": 0, 00:11:59.533 "transports": [] 00:11:59.533 }, 00:11:59.533 { 00:11:59.533 "name": "nvmf_tgt_poll_group_001", 00:11:59.533 "admin_qpairs": 0, 00:11:59.533 "io_qpairs": 0, 00:11:59.533 "current_admin_qpairs": 0, 00:11:59.533 "current_io_qpairs": 0, 00:11:59.533 "pending_bdev_io": 0, 00:11:59.533 "completed_nvme_io": 0, 00:11:59.533 "transports": [] 00:11:59.533 }, 00:11:59.533 { 00:11:59.533 "name": "nvmf_tgt_poll_group_002", 00:11:59.533 "admin_qpairs": 0, 00:11:59.533 "io_qpairs": 0, 00:11:59.533 "current_admin_qpairs": 0, 00:11:59.533 "current_io_qpairs": 0, 00:11:59.533 "pending_bdev_io": 0, 00:11:59.533 "completed_nvme_io": 0, 00:11:59.533 "transports": [] 00:11:59.533 }, 00:11:59.533 { 00:11:59.533 "name": "nvmf_tgt_poll_group_003", 00:11:59.533 "admin_qpairs": 0, 00:11:59.533 "io_qpairs": 0, 00:11:59.533 "current_admin_qpairs": 0, 00:11:59.533 "current_io_qpairs": 0, 00:11:59.533 "pending_bdev_io": 0, 00:11:59.533 "completed_nvme_io": 0, 00:11:59.533 "transports": [] 00:11:59.533 } 00:11:59.533 ] 00:11:59.533 }' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 [2024-11-20 09:50:32.650399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:59.534 "tick_rate": 2100000000, 00:11:59.534 "poll_groups": [ 00:11:59.534 { 00:11:59.534 "name": "nvmf_tgt_poll_group_000", 00:11:59.534 "admin_qpairs": 0, 00:11:59.534 "io_qpairs": 0, 00:11:59.534 "current_admin_qpairs": 0, 00:11:59.534 "current_io_qpairs": 0, 00:11:59.534 "pending_bdev_io": 0, 00:11:59.534 "completed_nvme_io": 0, 00:11:59.534 "transports": [ 00:11:59.534 { 00:11:59.534 "trtype": "TCP" 00:11:59.534 } 00:11:59.534 ] 00:11:59.534 }, 00:11:59.534 { 00:11:59.534 "name": "nvmf_tgt_poll_group_001", 00:11:59.534 "admin_qpairs": 0, 00:11:59.534 "io_qpairs": 0, 00:11:59.534 "current_admin_qpairs": 0, 00:11:59.534 "current_io_qpairs": 0, 00:11:59.534 "pending_bdev_io": 0, 00:11:59.534 "completed_nvme_io": 0, 00:11:59.534 "transports": [ 00:11:59.534 { 00:11:59.534 "trtype": "TCP" 00:11:59.534 } 00:11:59.534 ] 00:11:59.534 }, 00:11:59.534 { 00:11:59.534 "name": "nvmf_tgt_poll_group_002", 00:11:59.534 "admin_qpairs": 0, 00:11:59.534 "io_qpairs": 0, 00:11:59.534 
"current_admin_qpairs": 0, 00:11:59.534 "current_io_qpairs": 0, 00:11:59.534 "pending_bdev_io": 0, 00:11:59.534 "completed_nvme_io": 0, 00:11:59.534 "transports": [ 00:11:59.534 { 00:11:59.534 "trtype": "TCP" 00:11:59.534 } 00:11:59.534 ] 00:11:59.534 }, 00:11:59.534 { 00:11:59.534 "name": "nvmf_tgt_poll_group_003", 00:11:59.534 "admin_qpairs": 0, 00:11:59.534 "io_qpairs": 0, 00:11:59.534 "current_admin_qpairs": 0, 00:11:59.534 "current_io_qpairs": 0, 00:11:59.534 "pending_bdev_io": 0, 00:11:59.534 "completed_nvme_io": 0, 00:11:59.534 "transports": [ 00:11:59.534 { 00:11:59.534 "trtype": "TCP" 00:11:59.534 } 00:11:59.534 ] 00:11:59.534 } 00:11:59.534 ] 00:11:59.534 }' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 Malloc1 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 [2024-11-20 09:50:32.821159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.534 
09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:59.534 [2024-11-20 09:50:32.849833] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:59.534 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:59.534 could not add new controller: failed to write to nvme-fabrics device 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.534 09:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.534 09:50:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.469 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.469 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.469 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.469 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.469 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:03.001 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.001 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.001 [2024-11-20 09:50:36.174882] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:03.001 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:03.001 could not add new controller: failed to write to nvme-fabrics device 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:03.001 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:03.002 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:03.002 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:03.002 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.002 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.002 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.002 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.936 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.936 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.936 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.936 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.936 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:05.836 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.095 [2024-11-20 09:50:39.518187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.095 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.472 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.472 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.472 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.472 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.472 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.394 09:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.394 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.394 [2024-11-20 09:50:42.832359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.395 09:50:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.772 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.772 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.772 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.772 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.772 09:50:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 [2024-11-20 09:50:46.185360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.676 09:50:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.053 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.053 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.053 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:14.053 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.053 09:50:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 [2024-11-20 09:50:49.461727] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.957 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.958 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.958 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.958 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.958 09:50:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.333 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.333 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.333 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.333 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.333 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.234 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.493 [2024-11-20 09:50:52.821811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.493 09:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.493 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.428 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.428 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.428 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.428 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.428 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.960 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.960 [2024-11-20 09:50:56.145949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.960 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 [2024-11-20 09:50:56.193953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.961 
09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 [2024-11-20 09:50:56.242083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.961 
09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.961 [2024-11-20 09:50:56.290250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.961 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 [2024-11-20 
09:50:56.338418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 
09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:22.962 "tick_rate": 2100000000, 00:12:22.962 "poll_groups": [ 00:12:22.962 { 00:12:22.962 "name": "nvmf_tgt_poll_group_000", 00:12:22.962 "admin_qpairs": 2, 00:12:22.962 "io_qpairs": 168, 00:12:22.962 "current_admin_qpairs": 0, 00:12:22.962 "current_io_qpairs": 0, 00:12:22.962 "pending_bdev_io": 0, 00:12:22.962 "completed_nvme_io": 168, 00:12:22.962 "transports": [ 00:12:22.962 { 00:12:22.962 "trtype": "TCP" 00:12:22.962 } 00:12:22.962 ] 00:12:22.962 }, 00:12:22.962 { 00:12:22.962 "name": "nvmf_tgt_poll_group_001", 00:12:22.962 "admin_qpairs": 2, 00:12:22.962 "io_qpairs": 168, 00:12:22.962 "current_admin_qpairs": 0, 00:12:22.962 "current_io_qpairs": 0, 00:12:22.962 "pending_bdev_io": 0, 00:12:22.962 "completed_nvme_io": 318, 00:12:22.962 "transports": [ 00:12:22.962 { 00:12:22.962 "trtype": "TCP" 00:12:22.962 } 00:12:22.962 ] 00:12:22.962 }, 00:12:22.962 { 00:12:22.962 "name": "nvmf_tgt_poll_group_002", 00:12:22.962 "admin_qpairs": 1, 00:12:22.962 "io_qpairs": 168, 00:12:22.962 "current_admin_qpairs": 0, 00:12:22.962 "current_io_qpairs": 0, 00:12:22.962 "pending_bdev_io": 0, 00:12:22.962 "completed_nvme_io": 268, 00:12:22.962 "transports": [ 00:12:22.962 { 00:12:22.962 "trtype": "TCP" 00:12:22.962 } 00:12:22.962 ] 00:12:22.962 }, 00:12:22.962 { 00:12:22.962 "name": "nvmf_tgt_poll_group_003", 00:12:22.962 "admin_qpairs": 2, 00:12:22.962 "io_qpairs": 168, 
00:12:22.962 "current_admin_qpairs": 0, 00:12:22.962 "current_io_qpairs": 0, 00:12:22.962 "pending_bdev_io": 0, 00:12:22.962 "completed_nvme_io": 268, 00:12:22.962 "transports": [ 00:12:22.962 { 00:12:22.962 "trtype": "TCP" 00:12:22.962 } 00:12:22.962 ] 00:12:22.962 } 00:12:22.962 ] 00:12:22.962 }' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.962 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.962 rmmod nvme_tcp 00:12:22.962 rmmod nvme_fabrics 00:12:22.962 rmmod nvme_keyring 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2586926 ']' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2586926 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2586926 ']' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2586926 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586926 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586926' 00:12:23.222 killing process with pid 2586926 00:12:23.222 09:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2586926 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2586926 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.222 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.760 00:12:25.760 real 0m32.834s 00:12:25.760 user 1m38.823s 00:12:25.760 sys 0m6.593s 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.760 ************************************ 00:12:25.760 END TEST 
nvmf_rpc 00:12:25.760 ************************************ 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.760 ************************************ 00:12:25.760 START TEST nvmf_invalid 00:12:25.760 ************************************ 00:12:25.760 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.760 * Looking for test storage... 00:12:25.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.760 --rc genhtml_branch_coverage=1 00:12:25.760 --rc genhtml_function_coverage=1 00:12:25.760 --rc genhtml_legend=1 00:12:25.760 --rc geninfo_all_blocks=1 00:12:25.760 --rc geninfo_unexecuted_blocks=1 00:12:25.760 00:12:25.760 ' 
00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.760 --rc genhtml_branch_coverage=1 00:12:25.760 --rc genhtml_function_coverage=1 00:12:25.760 --rc genhtml_legend=1 00:12:25.760 --rc geninfo_all_blocks=1 00:12:25.760 --rc geninfo_unexecuted_blocks=1 00:12:25.760 00:12:25.760 ' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.760 --rc genhtml_branch_coverage=1 00:12:25.760 --rc genhtml_function_coverage=1 00:12:25.760 --rc genhtml_legend=1 00:12:25.760 --rc geninfo_all_blocks=1 00:12:25.760 --rc geninfo_unexecuted_blocks=1 00:12:25.760 00:12:25.760 ' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.760 --rc genhtml_branch_coverage=1 00:12:25.760 --rc genhtml_function_coverage=1 00:12:25.760 --rc genhtml_legend=1 00:12:25.760 --rc geninfo_all_blocks=1 00:12:25.760 --rc geninfo_unexecuted_blocks=1 00:12:25.760 00:12:25.760 ' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.760 09:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.760 
09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.760 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.761 09:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.761 09:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.761 09:50:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.333 09:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.333 09:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.333 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:32.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:32.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:32.334 Found net devices under 0000:86:00.0: cvl_0_0 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:32.334 Found net devices under 0000:86:00.1: cvl_0_1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.334 09:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.334 09:51:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.334 09:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:12:32.334 00:12:32.334 --- 10.0.0.2 ping statistics --- 00:12:32.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.334 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:32.334 00:12:32.334 --- 10.0.0.1 ping statistics --- 00:12:32.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.334 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.334 09:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2594948 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2594948 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2594948 ']' 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.334 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.335 [2024-11-20 09:51:05.192788] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:12:32.335 [2024-11-20 09:51:05.192840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.335 [2024-11-20 09:51:05.274423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.335 [2024-11-20 09:51:05.317129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.335 [2024-11-20 09:51:05.317167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.335 [2024-11-20 09:51:05.317173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.335 [2024-11-20 09:51:05.317179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.335 [2024-11-20 09:51:05.317184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.335 [2024-11-20 09:51:05.318697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.335 [2024-11-20 09:51:05.318836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.335 [2024-11-20 09:51:05.318944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.335 [2024-11-20 09:51:05.318945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26765 00:12:32.335 [2024-11-20 09:51:05.643848] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:32.335 { 00:12:32.335 "nqn": "nqn.2016-06.io.spdk:cnode26765", 00:12:32.335 "tgt_name": "foobar", 00:12:32.335 "method": "nvmf_create_subsystem", 00:12:32.335 "req_id": 1 00:12:32.335 } 00:12:32.335 Got JSON-RPC error 
response 00:12:32.335 response: 00:12:32.335 { 00:12:32.335 "code": -32603, 00:12:32.335 "message": "Unable to find target foobar" 00:12:32.335 }' 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:32.335 { 00:12:32.335 "nqn": "nqn.2016-06.io.spdk:cnode26765", 00:12:32.335 "tgt_name": "foobar", 00:12:32.335 "method": "nvmf_create_subsystem", 00:12:32.335 "req_id": 1 00:12:32.335 } 00:12:32.335 Got JSON-RPC error response 00:12:32.335 response: 00:12:32.335 { 00:12:32.335 "code": -32603, 00:12:32.335 "message": "Unable to find target foobar" 00:12:32.335 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8784 00:12:32.335 [2024-11-20 09:51:05.856594] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8784: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:32.335 { 00:12:32.335 "nqn": "nqn.2016-06.io.spdk:cnode8784", 00:12:32.335 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:32.335 "method": "nvmf_create_subsystem", 00:12:32.335 "req_id": 1 00:12:32.335 } 00:12:32.335 Got JSON-RPC error response 00:12:32.335 response: 00:12:32.335 { 00:12:32.335 "code": -32602, 00:12:32.335 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:32.335 }' 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:32.335 { 00:12:32.335 "nqn": "nqn.2016-06.io.spdk:cnode8784", 00:12:32.335 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:32.335 "method": "nvmf_create_subsystem", 00:12:32.335 
"req_id": 1 00:12:32.335 } 00:12:32.335 Got JSON-RPC error response 00:12:32.335 response: 00:12:32.335 { 00:12:32.335 "code": -32602, 00:12:32.335 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:32.335 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:32.335 09:51:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20735 00:12:32.595 [2024-11-20 09:51:06.069289] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20735: invalid model number 'SPDK_Controller' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:32.595 { 00:12:32.595 "nqn": "nqn.2016-06.io.spdk:cnode20735", 00:12:32.595 "model_number": "SPDK_Controller\u001f", 00:12:32.595 "method": "nvmf_create_subsystem", 00:12:32.595 "req_id": 1 00:12:32.595 } 00:12:32.595 Got JSON-RPC error response 00:12:32.595 response: 00:12:32.595 { 00:12:32.595 "code": -32602, 00:12:32.595 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.595 }' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:32.595 { 00:12:32.595 "nqn": "nqn.2016-06.io.spdk:cnode20735", 00:12:32.595 "model_number": "SPDK_Controller\u001f", 00:12:32.595 "method": "nvmf_create_subsystem", 00:12:32.595 "req_id": 1 00:12:32.595 } 00:12:32.595 Got JSON-RPC error response 00:12:32.595 response: 00:12:32.595 { 00:12:32.595 "code": -32602, 00:12:32.595 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.595 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.595 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.595 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:32.596 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:32.596 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.596 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:32.856 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.856 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%>Qy">HuJaA=F'\''{9;4|T' 00:12:32.856 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%>Qy">HuJaA=F'\''{9;4|T' nqn.2016-06.io.spdk:cnode16371 00:12:33.116 [2024-11-20 09:51:06.442559] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16371: invalid serial number '%>Qy">HuJaA=F'{9;4|T' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:33.116 { 00:12:33.116 "nqn": "nqn.2016-06.io.spdk:cnode16371", 00:12:33.116 "serial_number": "%>Q\u007fy\">HuJaA=F'\''{9;4|T", 00:12:33.116 "method": "nvmf_create_subsystem", 00:12:33.116 "req_id": 1 00:12:33.116 } 00:12:33.116 Got JSON-RPC error response 00:12:33.116 response: 00:12:33.116 { 00:12:33.116 "code": -32602, 00:12:33.116 "message": "Invalid SN %>Q\u007fy\">HuJaA=F'\''{9;4|T" 00:12:33.116 }' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:33.116 { 00:12:33.116 "nqn": "nqn.2016-06.io.spdk:cnode16371", 00:12:33.116 "serial_number": "%>Q\u007fy\">HuJaA=F'{9;4|T", 00:12:33.116 "method": "nvmf_create_subsystem", 00:12:33.116 "req_id": 1 00:12:33.116 } 00:12:33.116 Got JSON-RPC error response 00:12:33.116 response: 00:12:33.116 { 00:12:33.116 "code": -32602, 00:12:33.116 "message": "Invalid SN %>Q\u007fy\">HuJaA=F'{9;4|T" 00:12:33.116 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=41 ll 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:33.116 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.116 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:33.117 
09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:33.117 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:33.117 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.117 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:33.375 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh"y?P=(5' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh"y?P=(5' nqn.2016-06.io.spdk:cnode28793 00:12:33.375 [2024-11-20 09:51:06.912117] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28793: invalid model number '~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh"y?P=(5' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:33.375 { 00:12:33.375 "nqn": "nqn.2016-06.io.spdk:cnode28793", 00:12:33.375 "model_number": "~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh\"y?P=(5", 00:12:33.375 "method": 
"nvmf_create_subsystem", 00:12:33.375 "req_id": 1 00:12:33.375 } 00:12:33.375 Got JSON-RPC error response 00:12:33.375 response: 00:12:33.375 { 00:12:33.375 "code": -32602, 00:12:33.375 "message": "Invalid MN ~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh\"y?P=(5" 00:12:33.375 }' 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:33.375 { 00:12:33.375 "nqn": "nqn.2016-06.io.spdk:cnode28793", 00:12:33.375 "model_number": "~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh\"y?P=(5", 00:12:33.375 "method": "nvmf_create_subsystem", 00:12:33.375 "req_id": 1 00:12:33.375 } 00:12:33.375 Got JSON-RPC error response 00:12:33.375 response: 00:12:33.375 { 00:12:33.375 "code": -32602, 00:12:33.375 "message": "Invalid MN ~9A0%ATc!!2o`>M-P=54X;D~8FEi.|bNbh\"y?P=(5" 00:12:33.375 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:33.375 09:51:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:33.633 [2024-11-20 09:51:07.116874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.633 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:33.890 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:33.891 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:33.891 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:33.891 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:33.891 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 
00:12:34.148 [2024-11-20 09:51:07.506133] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:34.148 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:34.148 { 00:12:34.148 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:34.148 "listen_address": { 00:12:34.148 "trtype": "tcp", 00:12:34.148 "traddr": "", 00:12:34.148 "trsvcid": "4421" 00:12:34.148 }, 00:12:34.148 "method": "nvmf_subsystem_remove_listener", 00:12:34.148 "req_id": 1 00:12:34.148 } 00:12:34.148 Got JSON-RPC error response 00:12:34.148 response: 00:12:34.148 { 00:12:34.148 "code": -32602, 00:12:34.148 "message": "Invalid parameters" 00:12:34.148 }' 00:12:34.148 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:34.148 { 00:12:34.148 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:34.148 "listen_address": { 00:12:34.148 "trtype": "tcp", 00:12:34.148 "traddr": "", 00:12:34.148 "trsvcid": "4421" 00:12:34.148 }, 00:12:34.148 "method": "nvmf_subsystem_remove_listener", 00:12:34.148 "req_id": 1 00:12:34.148 } 00:12:34.148 Got JSON-RPC error response 00:12:34.148 response: 00:12:34.148 { 00:12:34.148 "code": -32602, 00:12:34.148 "message": "Invalid parameters" 00:12:34.148 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:34.148 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9482 -i 0 00:12:34.148 [2024-11-20 09:51:07.706755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9482: invalid cntlid range [0-65519] 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:34.406 { 00:12:34.406 "nqn": "nqn.2016-06.io.spdk:cnode9482", 00:12:34.406 "min_cntlid": 0, 00:12:34.406 "method": "nvmf_create_subsystem", 00:12:34.406 "req_id": 1 00:12:34.406 } 
00:12:34.406 Got JSON-RPC error response 00:12:34.406 response: 00:12:34.406 { 00:12:34.406 "code": -32602, 00:12:34.406 "message": "Invalid cntlid range [0-65519]" 00:12:34.406 }' 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:34.406 { 00:12:34.406 "nqn": "nqn.2016-06.io.spdk:cnode9482", 00:12:34.406 "min_cntlid": 0, 00:12:34.406 "method": "nvmf_create_subsystem", 00:12:34.406 "req_id": 1 00:12:34.406 } 00:12:34.406 Got JSON-RPC error response 00:12:34.406 response: 00:12:34.406 { 00:12:34.406 "code": -32602, 00:12:34.406 "message": "Invalid cntlid range [0-65519]" 00:12:34.406 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19581 -i 65520 00:12:34.406 [2024-11-20 09:51:07.919476] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19581: invalid cntlid range [65520-65519] 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:34.406 { 00:12:34.406 "nqn": "nqn.2016-06.io.spdk:cnode19581", 00:12:34.406 "min_cntlid": 65520, 00:12:34.406 "method": "nvmf_create_subsystem", 00:12:34.406 "req_id": 1 00:12:34.406 } 00:12:34.406 Got JSON-RPC error response 00:12:34.406 response: 00:12:34.406 { 00:12:34.406 "code": -32602, 00:12:34.406 "message": "Invalid cntlid range [65520-65519]" 00:12:34.406 }' 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:34.406 { 00:12:34.406 "nqn": "nqn.2016-06.io.spdk:cnode19581", 00:12:34.406 "min_cntlid": 65520, 00:12:34.406 "method": "nvmf_create_subsystem", 00:12:34.406 "req_id": 1 00:12:34.406 } 00:12:34.406 Got JSON-RPC error response 00:12:34.406 response: 00:12:34.406 { 00:12:34.406 "code": -32602, 00:12:34.406 "message": 
"Invalid cntlid range [65520-65519]" 00:12:34.406 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.406 09:51:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15366 -I 0 00:12:34.663 [2024-11-20 09:51:08.120193] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15366: invalid cntlid range [1-0] 00:12:34.663 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:34.663 { 00:12:34.663 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:12:34.663 "max_cntlid": 0, 00:12:34.663 "method": "nvmf_create_subsystem", 00:12:34.663 "req_id": 1 00:12:34.663 } 00:12:34.663 Got JSON-RPC error response 00:12:34.663 response: 00:12:34.663 { 00:12:34.663 "code": -32602, 00:12:34.663 "message": "Invalid cntlid range [1-0]" 00:12:34.663 }' 00:12:34.663 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:34.663 { 00:12:34.663 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:12:34.663 "max_cntlid": 0, 00:12:34.663 "method": "nvmf_create_subsystem", 00:12:34.663 "req_id": 1 00:12:34.663 } 00:12:34.663 Got JSON-RPC error response 00:12:34.663 response: 00:12:34.663 { 00:12:34.663 "code": -32602, 00:12:34.663 "message": "Invalid cntlid range [1-0]" 00:12:34.663 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.663 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15833 -I 65520 00:12:34.921 [2024-11-20 09:51:08.316879] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15833: invalid cntlid range [1-65520] 00:12:34.921 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:34.921 { 00:12:34.921 "nqn": 
"nqn.2016-06.io.spdk:cnode15833", 00:12:34.921 "max_cntlid": 65520, 00:12:34.921 "method": "nvmf_create_subsystem", 00:12:34.921 "req_id": 1 00:12:34.921 } 00:12:34.921 Got JSON-RPC error response 00:12:34.921 response: 00:12:34.921 { 00:12:34.921 "code": -32602, 00:12:34.921 "message": "Invalid cntlid range [1-65520]" 00:12:34.921 }' 00:12:34.921 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:34.921 { 00:12:34.921 "nqn": "nqn.2016-06.io.spdk:cnode15833", 00:12:34.921 "max_cntlid": 65520, 00:12:34.921 "method": "nvmf_create_subsystem", 00:12:34.921 "req_id": 1 00:12:34.921 } 00:12:34.921 Got JSON-RPC error response 00:12:34.921 response: 00:12:34.921 { 00:12:34.921 "code": -32602, 00:12:34.921 "message": "Invalid cntlid range [1-65520]" 00:12:34.921 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:34.921 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19504 -i 6 -I 5 00:12:35.179 [2024-11-20 09:51:08.517558] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19504: invalid cntlid range [6-5] 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:35.179 { 00:12:35.179 "nqn": "nqn.2016-06.io.spdk:cnode19504", 00:12:35.179 "min_cntlid": 6, 00:12:35.179 "max_cntlid": 5, 00:12:35.179 "method": "nvmf_create_subsystem", 00:12:35.179 "req_id": 1 00:12:35.179 } 00:12:35.179 Got JSON-RPC error response 00:12:35.179 response: 00:12:35.179 { 00:12:35.179 "code": -32602, 00:12:35.179 "message": "Invalid cntlid range [6-5]" 00:12:35.179 }' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:35.179 { 00:12:35.179 "nqn": "nqn.2016-06.io.spdk:cnode19504", 00:12:35.179 "min_cntlid": 6, 00:12:35.179 "max_cntlid": 5, 00:12:35.179 "method": 
"nvmf_create_subsystem", 00:12:35.179 "req_id": 1 00:12:35.179 } 00:12:35.179 Got JSON-RPC error response 00:12:35.179 response: 00:12:35.179 { 00:12:35.179 "code": -32602, 00:12:35.179 "message": "Invalid cntlid range [6-5]" 00:12:35.179 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:35.179 { 00:12:35.179 "name": "foobar", 00:12:35.179 "method": "nvmf_delete_target", 00:12:35.179 "req_id": 1 00:12:35.179 } 00:12:35.179 Got JSON-RPC error response 00:12:35.179 response: 00:12:35.179 { 00:12:35.179 "code": -32602, 00:12:35.179 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:35.179 }' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:35.179 { 00:12:35.179 "name": "foobar", 00:12:35.179 "method": "nvmf_delete_target", 00:12:35.179 "req_id": 1 00:12:35.179 } 00:12:35.179 Got JSON-RPC error response 00:12:35.179 response: 00:12:35.179 { 00:12:35.179 "code": -32602, 00:12:35.179 "message": "The specified target doesn't exist, cannot delete it." 
00:12:35.179 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.179 rmmod nvme_tcp 00:12:35.179 rmmod nvme_fabrics 00:12:35.179 rmmod nvme_keyring 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2594948 ']' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2594948 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2594948 ']' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2594948 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.179 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594948 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594948' 00:12:35.438 killing process with pid 2594948 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2594948 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2594948 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.438 09:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.438 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.979 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.979 00:12:37.979 real 0m12.061s 00:12:37.979 user 0m18.640s 00:12:37.979 sys 0m5.458s 00:12:37.979 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:37.979 ************************************ 00:12:37.979 END TEST nvmf_invalid 00:12:37.979 ************************************ 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.979 ************************************ 00:12:37.979 START TEST nvmf_connect_stress 00:12:37.979 ************************************ 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:37.979 * Looking for test storage... 
00:12:37.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:37.979 09:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.979 09:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.979 --rc genhtml_branch_coverage=1 00:12:37.979 --rc genhtml_function_coverage=1 00:12:37.979 --rc genhtml_legend=1 00:12:37.979 --rc geninfo_all_blocks=1 00:12:37.979 --rc geninfo_unexecuted_blocks=1 00:12:37.979 00:12:37.979 ' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.979 --rc genhtml_branch_coverage=1 00:12:37.979 --rc genhtml_function_coverage=1 00:12:37.979 --rc genhtml_legend=1 00:12:37.979 --rc geninfo_all_blocks=1 00:12:37.979 --rc geninfo_unexecuted_blocks=1 00:12:37.979 00:12:37.979 ' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.979 --rc genhtml_branch_coverage=1 00:12:37.979 --rc genhtml_function_coverage=1 00:12:37.979 --rc genhtml_legend=1 00:12:37.979 --rc geninfo_all_blocks=1 00:12:37.979 --rc geninfo_unexecuted_blocks=1 00:12:37.979 00:12:37.979 ' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.979 --rc genhtml_branch_coverage=1 00:12:37.979 --rc genhtml_function_coverage=1 00:12:37.979 --rc genhtml_legend=1 00:12:37.979 --rc geninfo_all_blocks=1 00:12:37.979 --rc geninfo_unexecuted_blocks=1 00:12:37.979 00:12:37.979 ' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.979 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.980 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.555 09:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.555 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.556 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.556 09:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.556 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.556 09:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.556 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.556 Found net devices under 0000:86:00.1: cvl_0_1 
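The `gather_supported_nvmf_pci_devs` trace above maps each NIC's PCI address to its kernel interface name by globbing the device's `net/` directory in sysfs and stripping the path prefix (`${pci_net_devs[@]##*/}`). A minimal sketch of that pattern, run against a scratch directory instead of the machine-specific `/sys/bus/pci` tree — the PCI addresses and `cvl_0_*` names are taken from this run's log, the scratch layout is illustrative:

```shell
#!/usr/bin/env bash
# Recreate the sysfs-glob discovery pattern against a scratch directory,
# since real /sys/bus/pci contents depend on the machine under test.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/0000:86:00.0/net/cvl_0_0" "$sysroot/0000:86:00.1/net/cvl_0_1"

pci_devs=("0000:86:00.0" "0000:86:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Glob the device's net/ directory to find its interface name(s).
    pci_net_devs=("$sysroot/$pci/net/"*)
    # Keep only the basename, mirroring ${pci_net_devs[@]##*/} in common.sh,
    # e.g. .../0000:86:00.0/net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    net_devs+=("${pci_net_devs[@]}")
done

printf '%s\n' "${net_devs[@]}"
rm -rf "$sysroot"
```

This is why the log then reports "Found net devices under 0000:86:00.0: cvl_0_0" and "...00.1: cvl_0_1": one interface per matched PCI function is appended to `net_devs`.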
00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.556 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:12:44.556 00:12:44.556 --- 10.0.0.2 ping statistics --- 00:12:44.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.556 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:44.556 00:12:44.556 --- 10.0.0.1 ping statistics --- 00:12:44.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.556 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:44.556 09:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2599424 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2599424 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2599424 ']' 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.556 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.557 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.557 [2024-11-20 09:51:17.330489] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:12:44.557 [2024-11-20 09:51:17.330542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.557 [2024-11-20 09:51:17.411637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:44.557 [2024-11-20 09:51:17.450981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.557 [2024-11-20 09:51:17.451017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.557 [2024-11-20 09:51:17.451024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.557 [2024-11-20 09:51:17.451030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.557 [2024-11-20 09:51:17.451036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
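The `nvmf_tcp_init` sequence traced earlier in this run (`ip netns add` / `ip link set ... netns` / `ip addr add` / `ping`) moves one interface of the NIC pair into a private network namespace so that target and initiator traffic cross a real link rather than loopback. A root-only sketch of that recipe, condensed from the log; the `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses are the ones this run happened to use, and this is illustrative, not meant to be replayed on a live test node:

```shell
# Requires root. Interface names below are from this log; substitute your own.
ip netns add cvl_0_0_ns_spdk                 # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                           # root ns -> target ns reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This also explains why `nvmf_tgt` is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` below: `NVMF_TARGET_NS_CMD` prefixes the app so it listens on 10.0.0.2 inside the namespace, while the initiator connects from 10.0.0.1 outside it.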
00:12:44.557 [2024-11-20 09:51:17.452487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.557 [2024-11-20 09:51:17.452594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.557 [2024-11-20 09:51:17.452594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.815 [2024-11-20 09:51:18.206298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.815 [2024-11-20 09:51:18.226538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.815 NULL1 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2599668 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.815 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.816 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.074 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.074 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:45.074 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.074 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.074 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.639 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.639 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:45.639 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.639 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.639 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.898 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.898 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:45.898 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.898 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.898 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.156 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.156 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:46.156 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.156 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.156 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:46.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.414 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.980 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.980 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:46.980 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.980 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.980 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.238 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.238 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:47.238 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.238 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.238 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.496 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.496 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:47.496 09:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.496 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.496 09:51:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.754 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.754 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:47.754 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.754 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.754 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.012 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.012 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:48.012 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.012 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.012 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.578 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.578 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:48.578 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.578 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.578 
09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.836 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.836 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:48.836 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.836 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.836 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.094 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.094 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:49.094 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.094 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.094 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.352 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.352 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:49.352 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.352 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.352 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.918 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.918 
09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:49.918 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.918 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.918 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.177 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.177 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:50.177 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.177 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.177 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.435 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.435 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:50.435 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.435 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.435 09:51:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.693 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.693 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:50.693 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:50.693 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.693 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.950 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.950 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:50.950 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.950 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.950 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.516 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.516 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:51.516 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.516 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.516 09:51:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.774 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.774 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:51.774 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.775 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.775 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:52.032 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.032 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:52.032 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.032 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.032 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.322 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.322 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:52.322 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.322 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.322 09:51:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.594 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.594 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:52.594 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.594 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.595 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.204 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.204 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2599668 00:12:53.204 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.204 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.204 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.563 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:53.563 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.563 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.563 09:51:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.563 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:53.563 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.563 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.563 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.129 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.129 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:54.129 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.129 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:54.129 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.388 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:54.388 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.388 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.388 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.646 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.646 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:54.646 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.646 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.646 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.905 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2599668 00:12:54.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2599668) - No such process 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2599668 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.905 rmmod nvme_tcp 00:12:54.905 rmmod nvme_fabrics 00:12:54.905 rmmod nvme_keyring 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2599424 ']' 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2599424 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2599424 ']' 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2599424 00:12:54.905 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599424 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599424' 00:12:55.164 killing process with pid 2599424 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2599424 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2599424 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.164 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.165 09:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:55.165 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:55.165 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:55.165 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:57.700
00:12:57.700 real	0m19.693s
00:12:57.700 user	0m41.501s
00:12:57.700 sys	0m8.597s
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:57.700 ************************************
00:12:57.700 END TEST nvmf_connect_stress
00:12:57.700 ************************************
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:57.700 ************************************
00:12:57.700 START TEST nvmf_fused_ordering
00:12:57.700 ************************************
00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:57.700 * Looking for test storage...
00:12:57.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:57.700 09:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.700 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.700 09:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:57.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.700 --rc genhtml_branch_coverage=1 00:12:57.700 --rc genhtml_function_coverage=1 00:12:57.700 --rc genhtml_legend=1 00:12:57.700 --rc geninfo_all_blocks=1 00:12:57.700 --rc geninfo_unexecuted_blocks=1 00:12:57.700 00:12:57.700 ' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.700 --rc genhtml_branch_coverage=1 00:12:57.700 --rc genhtml_function_coverage=1 00:12:57.700 --rc genhtml_legend=1 00:12:57.700 --rc geninfo_all_blocks=1 00:12:57.700 --rc geninfo_unexecuted_blocks=1 00:12:57.700 00:12:57.700 ' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.700 --rc genhtml_branch_coverage=1 00:12:57.700 --rc genhtml_function_coverage=1 00:12:57.700 --rc genhtml_legend=1 00:12:57.700 --rc geninfo_all_blocks=1 00:12:57.700 --rc geninfo_unexecuted_blocks=1 00:12:57.700 00:12:57.700 ' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.700 --rc genhtml_branch_coverage=1 00:12:57.700 --rc genhtml_function_coverage=1 00:12:57.700 --rc genhtml_legend=1 00:12:57.700 --rc geninfo_all_blocks=1 00:12:57.700 --rc geninfo_unexecuted_blocks=1 00:12:57.700 00:12:57.700 ' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.700 09:51:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.294 09:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.294 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.294 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.294 09:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.295 09:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.295 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.295 Found net devices under 0000:86:00.1: cvl_0_1 
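The discovery step above (nvmf/common.sh@410-428) walks each candidate PCI function and collects the kernel net interfaces registered under its sysfs node, yielding `cvl_0_0` and `cvl_0_1` here. A rough sketch of that lookup (not the SPDK script itself; the helper name `pci_net_devs` and the `root` parameter are introduced here only so the sketch can be exercised outside a real `/sys`):

```shell
# Sketch of the sysfs lookup: for one PCI address, list the net
# interfaces registered under /sys/bus/pci/devices/$pci/net/.
# "root" defaults to the real sysfs path but is overridable for testing.
pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local dev
    for dev in "$root/$pci/net/"*; do
        # An unmatched glob expands to itself; skip that case.
        [ -e "$dev" ] && printf '%s\n' "${dev##*/}"
    done
}
```

In the log, the resulting interface names are then appended to `net_devs` and later split into target (`cvl_0_0`) and initiator (`cvl_0_1`) roles.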
00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.295 09:51:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:13:04.295 00:13:04.295 --- 10.0.0.2 ping statistics --- 00:13:04.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.295 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:13:04.295 00:13:04.295 --- 10.0.0.1 ping statistics --- 00:13:04.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.295 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:04.295 09:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2604842 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2604842 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2604842 ']' 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 [2024-11-20 09:51:37.136325] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
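The `waitforlisten 2604842` call above blocks until the target process is up and its RPC socket (`/var/tmp/spdk.sock` in this run) is accepting connections. A minimal sketch of that style of readiness poll (assumption: readiness is signaled by the socket path appearing; `wait_for_sock` is a hypothetical helper, not the autotest_common.sh implementation, which also verifies the pid is alive):

```shell
# Poll for a socket/file path to appear, up to "retries" iterations
# of 0.1 s each; return 0 once present, 1 on timeout.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # Accept either a unix socket or, for testing, any path.
        [ -S "$sock" ] || [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```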
00:13:04.295 [2024-11-20 09:51:37.136368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.295 [2024-11-20 09:51:37.214379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.295 [2024-11-20 09:51:37.255585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.295 [2024-11-20 09:51:37.255616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.295 [2024-11-20 09:51:37.255626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.295 [2024-11-20 09:51:37.255632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.295 [2024-11-20 09:51:37.255637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
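The namespace wiring performed a few steps earlier in the log (nvmf/common.sh@250-291) moved the target NIC into a fresh netns and addressed the two ends of the link before the target was launched inside it. Summarized as a dry-run sketch that only prints the command plan (the real steps need root and the physical NICs; `netns_plan` is a hypothetical helper, and the names/addresses are the ones from this run):

```shell
# Print the wiring plan: target interface goes into the namespace as
# 10.0.0.2/24, initiator interface stays in the root ns as 10.0.0.1/24,
# and port 4420 is opened for the initiator side.
netns_plan() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}
```

The cross-namespace pings in the log (10.0.0.2 from the root ns, 10.0.0.1 from inside the netns) then confirm this wiring before `nvmf_tgt` is started with `ip netns exec`.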
00:13:04.295 [2024-11-20 09:51:37.256190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 [2024-11-20 09:51:37.404450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 [2024-11-20 09:51:37.424657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.295 NULL1 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.295 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.296 09:51:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:04.296 [2024-11-20 09:51:37.482902] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:13:04.296 [2024-11-20 09:51:37.482934] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604996 ] 00:13:04.296 Attached to nqn.2016-06.io.spdk:cnode1 00:13:04.296 Namespace ID: 1 size: 1GB 00:13:04.296 fused_ordering(0) 00:13:04.296 fused_ordering(1) 00:13:04.296 fused_ordering(2) 00:13:04.296 fused_ordering(3) 00:13:04.296 fused_ordering(4) 00:13:04.296 fused_ordering(5) 00:13:04.296 fused_ordering(6) 00:13:04.296 fused_ordering(7) 00:13:04.296 fused_ordering(8) 00:13:04.296 fused_ordering(9) 00:13:04.296 fused_ordering(10) 00:13:04.296 fused_ordering(11) 00:13:04.296 fused_ordering(12) 00:13:04.296 fused_ordering(13) 00:13:04.296 fused_ordering(14) 00:13:04.296 fused_ordering(15) 00:13:04.296 fused_ordering(16) 00:13:04.296 fused_ordering(17) 00:13:04.296 fused_ordering(18) 00:13:04.296 fused_ordering(19) 00:13:04.296 fused_ordering(20) 00:13:04.296 fused_ordering(21) 00:13:04.296 fused_ordering(22) 00:13:04.296 fused_ordering(23) 00:13:04.296 fused_ordering(24) 00:13:04.296 fused_ordering(25) 00:13:04.296 fused_ordering(26) 00:13:04.296 fused_ordering(27) 00:13:04.296 
fused_ordering(28) 00:13:04.296 fused_ordering(29) 00:13:04.296 fused_ordering(30) 00:13:04.296 fused_ordering(31) 00:13:04.296 fused_ordering(32) 00:13:04.296 fused_ordering(33) 00:13:04.296 fused_ordering(34) 00:13:04.296 fused_ordering(35) 00:13:04.296 fused_ordering(36) 00:13:04.296 fused_ordering(37) 00:13:04.296 fused_ordering(38) 00:13:04.296 fused_ordering(39) 00:13:04.296 fused_ordering(40) 00:13:04.296 fused_ordering(41) 00:13:04.296 fused_ordering(42) 00:13:04.296 fused_ordering(43) 00:13:04.296 fused_ordering(44) 00:13:04.296 fused_ordering(45) 00:13:04.296 fused_ordering(46) 00:13:04.296 fused_ordering(47) 00:13:04.296 fused_ordering(48) 00:13:04.296 fused_ordering(49) 00:13:04.296 fused_ordering(50) 00:13:04.296 fused_ordering(51) 00:13:04.296 fused_ordering(52) 00:13:04.296 fused_ordering(53) 00:13:04.296 fused_ordering(54) 00:13:04.296 fused_ordering(55) 00:13:04.296 fused_ordering(56) 00:13:04.296 fused_ordering(57) 00:13:04.296 fused_ordering(58) 00:13:04.296 fused_ordering(59) 00:13:04.296 fused_ordering(60) 00:13:04.296 fused_ordering(61) 00:13:04.296 fused_ordering(62) 00:13:04.296 fused_ordering(63) 00:13:04.296 fused_ordering(64) 00:13:04.296 fused_ordering(65) 00:13:04.296 fused_ordering(66) 00:13:04.296 fused_ordering(67) 00:13:04.296 fused_ordering(68) 00:13:04.296 fused_ordering(69) 00:13:04.296 fused_ordering(70) 00:13:04.296 fused_ordering(71) 00:13:04.296 fused_ordering(72) 00:13:04.296 fused_ordering(73) 00:13:04.296 fused_ordering(74) 00:13:04.296 fused_ordering(75) 00:13:04.296 fused_ordering(76) 00:13:04.296 fused_ordering(77) 00:13:04.296 fused_ordering(78) 00:13:04.296 fused_ordering(79) 00:13:04.296 fused_ordering(80) 00:13:04.296 fused_ordering(81) 00:13:04.296 fused_ordering(82) 00:13:04.296 fused_ordering(83) 00:13:04.296 fused_ordering(84) 00:13:04.296 fused_ordering(85) 00:13:04.296 fused_ordering(86) 00:13:04.296 fused_ordering(87) 00:13:04.296 fused_ordering(88) 00:13:04.296 fused_ordering(89) 00:13:04.296 
fused_ordering(90) 00:13:04.296 fused_ordering(91) 00:13:04.296 fused_ordering(92) 00:13:04.296 fused_ordering(93) 00:13:04.296 fused_ordering(94) 00:13:04.296 fused_ordering(95) 00:13:04.296 fused_ordering(96) 00:13:04.296 fused_ordering(97) 00:13:04.296 fused_ordering(98) 00:13:04.296 fused_ordering(99) 00:13:04.296 fused_ordering(100) 00:13:04.296 fused_ordering(101) 00:13:04.296 fused_ordering(102) 00:13:04.296 fused_ordering(103) 00:13:04.296 fused_ordering(104) 00:13:04.296 fused_ordering(105) 00:13:04.296 fused_ordering(106) 00:13:04.296 fused_ordering(107) 00:13:04.296 fused_ordering(108) 00:13:04.296 fused_ordering(109) 00:13:04.296 fused_ordering(110) 00:13:04.296 fused_ordering(111) 00:13:04.296 fused_ordering(112) 00:13:04.296 fused_ordering(113) 00:13:04.296 fused_ordering(114) 00:13:04.296 fused_ordering(115) 00:13:04.296 fused_ordering(116) 00:13:04.296 fused_ordering(117) 00:13:04.296 fused_ordering(118) 00:13:04.296 fused_ordering(119) 00:13:04.296 fused_ordering(120) 00:13:04.296 fused_ordering(121) 00:13:04.296 fused_ordering(122) 00:13:04.296 fused_ordering(123) 00:13:04.296 fused_ordering(124) 00:13:04.296 fused_ordering(125) 00:13:04.296 fused_ordering(126) 00:13:04.296 fused_ordering(127) 00:13:04.296 fused_ordering(128) 00:13:04.296 fused_ordering(129) 00:13:04.296 fused_ordering(130) 00:13:04.296 fused_ordering(131) 00:13:04.296 fused_ordering(132) 00:13:04.296 fused_ordering(133) 00:13:04.296 fused_ordering(134) 00:13:04.296 fused_ordering(135) 00:13:04.296 fused_ordering(136) 00:13:04.296 fused_ordering(137) 00:13:04.296 fused_ordering(138) 00:13:04.296 fused_ordering(139) 00:13:04.296 fused_ordering(140) 00:13:04.296 fused_ordering(141) 00:13:04.296 fused_ordering(142) 00:13:04.296 fused_ordering(143) 00:13:04.296 fused_ordering(144) 00:13:04.296 fused_ordering(145) 00:13:04.296 fused_ordering(146) 00:13:04.296 fused_ordering(147) 00:13:04.296 fused_ordering(148) 00:13:04.296 fused_ordering(149) 00:13:04.296 fused_ordering(150) 
00:13:04.296 fused_ordering(151) 00:13:04.296 fused_ordering(152) 00:13:04.296 fused_ordering(153) 00:13:04.296 fused_ordering(154) 00:13:04.296 fused_ordering(155) 00:13:04.296 fused_ordering(156) 00:13:04.296 fused_ordering(157) 00:13:04.296 fused_ordering(158) 00:13:04.296 fused_ordering(159) 00:13:04.296 fused_ordering(160) 00:13:04.296 fused_ordering(161) 00:13:04.296 fused_ordering(162) 00:13:04.296 fused_ordering(163) 00:13:04.296 fused_ordering(164) 00:13:04.296 fused_ordering(165) 00:13:04.296 fused_ordering(166) 00:13:04.296 fused_ordering(167) 00:13:04.296 fused_ordering(168) 00:13:04.296 fused_ordering(169) 00:13:04.296 fused_ordering(170) 00:13:04.296 fused_ordering(171) 00:13:04.296 fused_ordering(172) 00:13:04.296 fused_ordering(173) 00:13:04.296 fused_ordering(174) 00:13:04.296 fused_ordering(175) 00:13:04.296 fused_ordering(176) 00:13:04.296 fused_ordering(177) 00:13:04.296 fused_ordering(178) 00:13:04.296 fused_ordering(179) 00:13:04.296 fused_ordering(180) 00:13:04.296 fused_ordering(181) 00:13:04.296 fused_ordering(182) 00:13:04.296 fused_ordering(183) 00:13:04.296 fused_ordering(184) 00:13:04.296 fused_ordering(185) 00:13:04.296 fused_ordering(186) 00:13:04.296 fused_ordering(187) 00:13:04.296 fused_ordering(188) 00:13:04.296 fused_ordering(189) 00:13:04.296 fused_ordering(190) 00:13:04.296 fused_ordering(191) 00:13:04.296 fused_ordering(192) 00:13:04.296 fused_ordering(193) 00:13:04.296 fused_ordering(194) 00:13:04.296 fused_ordering(195) 00:13:04.296 fused_ordering(196) 00:13:04.296 fused_ordering(197) 00:13:04.296 fused_ordering(198) 00:13:04.296 fused_ordering(199) 00:13:04.296 fused_ordering(200) 00:13:04.296 fused_ordering(201) 00:13:04.296 fused_ordering(202) 00:13:04.296 fused_ordering(203) 00:13:04.296 fused_ordering(204) 00:13:04.296 fused_ordering(205) 00:13:04.555 fused_ordering(206) 00:13:04.555 fused_ordering(207) 00:13:04.555 fused_ordering(208) 00:13:04.555 fused_ordering(209) 00:13:04.555 fused_ordering(210) 00:13:04.555 
fused_ordering(211) 00:13:04.555 fused_ordering(212) 00:13:04.555 fused_ordering(213) 00:13:04.555 fused_ordering(214) 00:13:04.555 fused_ordering(215) 00:13:04.555 fused_ordering(216) 00:13:04.555 fused_ordering(217) 00:13:04.555 fused_ordering(218) 00:13:04.555 fused_ordering(219) 00:13:04.555 fused_ordering(220) 00:13:04.555 fused_ordering(221) 00:13:04.555 fused_ordering(222) 00:13:04.555 fused_ordering(223) 00:13:04.555 fused_ordering(224) 00:13:04.555 fused_ordering(225) 00:13:04.555 fused_ordering(226) 00:13:04.555 fused_ordering(227) 00:13:04.555 fused_ordering(228) 00:13:04.556 fused_ordering(229) 00:13:04.556 fused_ordering(230) 00:13:04.556 fused_ordering(231) 00:13:04.556 fused_ordering(232) 00:13:04.556 fused_ordering(233) 00:13:04.556 fused_ordering(234) 00:13:04.556 fused_ordering(235) 00:13:04.556 fused_ordering(236) 00:13:04.556 fused_ordering(237) 00:13:04.556 fused_ordering(238) 00:13:04.556 fused_ordering(239) 00:13:04.556 fused_ordering(240) 00:13:04.556 fused_ordering(241) 00:13:04.556 fused_ordering(242) 00:13:04.556 fused_ordering(243) 00:13:04.556 fused_ordering(244) 00:13:04.556 fused_ordering(245) 00:13:04.556 fused_ordering(246) 00:13:04.556 fused_ordering(247) 00:13:04.556 fused_ordering(248) 00:13:04.556 fused_ordering(249) 00:13:04.556 fused_ordering(250) 00:13:04.556 fused_ordering(251) 00:13:04.556 fused_ordering(252) 00:13:04.556 fused_ordering(253) 00:13:04.556 fused_ordering(254) 00:13:04.556 fused_ordering(255) 00:13:04.556 fused_ordering(256) 00:13:04.556 fused_ordering(257) 00:13:04.556 fused_ordering(258) 00:13:04.556 fused_ordering(259) 00:13:04.556 fused_ordering(260) 00:13:04.556 fused_ordering(261) 00:13:04.556 fused_ordering(262) 00:13:04.556 fused_ordering(263) 00:13:04.556 fused_ordering(264) 00:13:04.556 fused_ordering(265) 00:13:04.556 fused_ordering(266) 00:13:04.556 fused_ordering(267) 00:13:04.556 fused_ordering(268) 00:13:04.556 fused_ordering(269) 00:13:04.556 fused_ordering(270) 00:13:04.556 fused_ordering(271) 
00:13:04.556 fused_ordering(272) 00:13:04.556 fused_ordering(273) 00:13:04.556 fused_ordering(274) 00:13:04.556 fused_ordering(275) 00:13:04.556 fused_ordering(276) 00:13:04.556 fused_ordering(277) 00:13:04.556 fused_ordering(278) 00:13:04.556 fused_ordering(279) 00:13:04.556 fused_ordering(280) 00:13:04.556 fused_ordering(281) 00:13:04.556 fused_ordering(282) 00:13:04.556 fused_ordering(283) 00:13:04.556 fused_ordering(284) 00:13:04.556 fused_ordering(285) 00:13:04.556 fused_ordering(286) 00:13:04.556 fused_ordering(287) 00:13:04.556 fused_ordering(288) 00:13:04.556 fused_ordering(289) 00:13:04.556 fused_ordering(290) 00:13:04.556 fused_ordering(291) 00:13:04.556 fused_ordering(292) 00:13:04.556 fused_ordering(293) 00:13:04.556 fused_ordering(294) 00:13:04.556 fused_ordering(295) 00:13:04.556 fused_ordering(296) 00:13:04.556 fused_ordering(297) 00:13:04.556 fused_ordering(298) 00:13:04.556 fused_ordering(299) 00:13:04.556 fused_ordering(300) 00:13:04.556 fused_ordering(301) 00:13:04.556 fused_ordering(302) 00:13:04.556 fused_ordering(303) 00:13:04.556 fused_ordering(304) 00:13:04.556 fused_ordering(305) 00:13:04.556 fused_ordering(306) 00:13:04.556 fused_ordering(307) 00:13:04.556 fused_ordering(308) 00:13:04.556 fused_ordering(309) 00:13:04.556 fused_ordering(310) 00:13:04.556 fused_ordering(311) 00:13:04.556 fused_ordering(312) 00:13:04.556 fused_ordering(313) 00:13:04.556 fused_ordering(314) 00:13:04.556 fused_ordering(315) 00:13:04.556 fused_ordering(316) 00:13:04.556 fused_ordering(317) 00:13:04.556 fused_ordering(318) 00:13:04.556 fused_ordering(319) 00:13:04.556 fused_ordering(320) 00:13:04.556 fused_ordering(321) 00:13:04.556 fused_ordering(322) 00:13:04.556 fused_ordering(323) 00:13:04.556 fused_ordering(324) 00:13:04.556 fused_ordering(325) 00:13:04.556 fused_ordering(326) 00:13:04.556 fused_ordering(327) 00:13:04.556 fused_ordering(328) 00:13:04.556 fused_ordering(329) 00:13:04.556 fused_ordering(330) 00:13:04.556 fused_ordering(331) 00:13:04.556 
00:13:04.556 fused_ordering(332) … 00:13:05.645 fused_ordering(997) [repetitive per-iteration fused_ordering log entries for iterations 332 through 997 elided; timestamps ran from 00:13:04.556 to 00:13:05.645]
00:13:05.645 fused_ordering(998) 00:13:05.645 fused_ordering(999) 00:13:05.645 fused_ordering(1000) 00:13:05.645 fused_ordering(1001) 00:13:05.645 fused_ordering(1002) 00:13:05.645 fused_ordering(1003) 00:13:05.645 fused_ordering(1004) 00:13:05.645 fused_ordering(1005) 00:13:05.645 fused_ordering(1006) 00:13:05.645 fused_ordering(1007) 00:13:05.645 fused_ordering(1008) 00:13:05.645 fused_ordering(1009) 00:13:05.645 fused_ordering(1010) 00:13:05.645 fused_ordering(1011) 00:13:05.645 fused_ordering(1012) 00:13:05.645 fused_ordering(1013) 00:13:05.645 fused_ordering(1014) 00:13:05.645 fused_ordering(1015) 00:13:05.645 fused_ordering(1016) 00:13:05.645 fused_ordering(1017) 00:13:05.645 fused_ordering(1018) 00:13:05.645 fused_ordering(1019) 00:13:05.645 fused_ordering(1020) 00:13:05.645 fused_ordering(1021) 00:13:05.645 fused_ordering(1022) 00:13:05.645 fused_ordering(1023) 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.645 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.645 rmmod nvme_tcp 00:13:05.645 rmmod nvme_fabrics 00:13:05.904 rmmod nvme_keyring 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2604842 ']' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2604842 ']' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2604842' 00:13:05.904 killing process with pid 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2604842 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.904 09:51:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.440 00:13:08.440 real 0m10.700s 00:13:08.440 user 0m4.826s 00:13:08.440 sys 0m5.945s 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.440 ************************************ 00:13:08.440 END TEST nvmf_fused_ordering 00:13:08.440 ************************************ 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:08.440 09:51:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.440 ************************************ 00:13:08.440 START TEST nvmf_ns_masking 00:13:08.440 ************************************ 00:13:08.440 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:08.440 * Looking for test storage... 00:13:08.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.441 09:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.441 --rc genhtml_branch_coverage=1 00:13:08.441 --rc genhtml_function_coverage=1 00:13:08.441 --rc genhtml_legend=1 00:13:08.441 --rc geninfo_all_blocks=1 00:13:08.441 --rc geninfo_unexecuted_blocks=1 00:13:08.441 00:13:08.441 ' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.441 --rc genhtml_branch_coverage=1 00:13:08.441 --rc genhtml_function_coverage=1 00:13:08.441 --rc genhtml_legend=1 00:13:08.441 --rc geninfo_all_blocks=1 00:13:08.441 --rc geninfo_unexecuted_blocks=1 00:13:08.441 00:13:08.441 ' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.441 --rc genhtml_branch_coverage=1 00:13:08.441 --rc genhtml_function_coverage=1 00:13:08.441 --rc genhtml_legend=1 00:13:08.441 --rc geninfo_all_blocks=1 00:13:08.441 --rc geninfo_unexecuted_blocks=1 00:13:08.441 00:13:08.441 ' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.441 --rc genhtml_branch_coverage=1 00:13:08.441 --rc 
genhtml_function_coverage=1 00:13:08.441 --rc genhtml_legend=1 00:13:08.441 --rc geninfo_all_blocks=1 00:13:08.441 --rc geninfo_unexecuted_blocks=1 00:13:08.441 00:13:08.441 ' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.441 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=49ab1fe5-0b6e-4b30-b845-8e4a3481ec43 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=59f80e07-26cb-47e1-b108-ed97cc397b93 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bc5c7e52-8fed-419a-88a3-45891287e87e 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.442 09:51:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.010 09:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.010 09:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:15.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.010 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:15.010 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:15.011 Found net devices under 0000:86:00.0: cvl_0_0 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:15.011 Found net devices under 0000:86:00.1: cvl_0_1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:13:15.011 00:13:15.011 --- 10.0.0.2 ping statistics --- 00:13:15.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.011 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:15.011 00:13:15.011 --- 10.0.0.1 ping statistics --- 00:13:15.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.011 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2608842 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2608842 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2608842 ']' 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.011 09:51:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.011 [2024-11-20 09:51:47.893295] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:13:15.011 [2024-11-20 09:51:47.893337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.011 [2024-11-20 09:51:47.968440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.011 [2024-11-20 09:51:48.008563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.011 [2024-11-20 09:51:48.008596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.011 [2024-11-20 09:51:48.008603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.011 [2024-11-20 09:51:48.008608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.012 [2024-11-20 09:51:48.008613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.012 [2024-11-20 09:51:48.009191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.012 [2024-11-20 09:51:48.307250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:15.012 Malloc1 00:13:15.012 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:15.270 Malloc2 00:13:15.270 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.529 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:15.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.789 [2024-11-20 09:51:49.290281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:15.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc5c7e52-8fed-419a-88a3-45891287e87e -a 10.0.0.2 -s 4420 -i 4 00:13:16.047 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.047 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.047 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.047 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.047 09:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.945 [ 0]:0x1 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.945 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.204 
09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5836bf90d6ff4ce19668ba0c655d4d70 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5836bf90d6ff4ce19668ba0c655d4d70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.204 [ 0]:0x1 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:18.204 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5836bf90d6ff4ce19668ba0c655d4d70 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5836bf90d6ff4ce19668ba0c655d4d70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:18.462 [ 1]:0x2 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:18.462 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.719 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.719 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:18.976 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:18.976 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc5c7e52-8fed-419a-88a3-45891287e87e -a 10.0.0.2 -s 4420 -i 4 00:13:19.236 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:19.236 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.236 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.236 09:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:19.236 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:19.236 09:51:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.139 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.139 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:21.139 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.140 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:21.140 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.140 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:21.140 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:21.140 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.399 [ 0]:0x2 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.399 09:51:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:21.658 [ 0]:0x1 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5836bf90d6ff4ce19668ba0c655d4d70 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5836bf90d6ff4ce19668ba0c655d4d70 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.658 [ 1]:0x2 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.658 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:21.916 [ 0]:0x2 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:21.916 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.175 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:22.175 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:22.175 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc5c7e52-8fed-419a-88a3-45891287e87e -a 10.0.0.2 -s 4420 -i 4 00:13:22.433 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:22.433 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:22.434 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.434 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:22.434 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:22.434 09:51:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:24.334 09:51:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:24.593 [ 0]:0x1 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:24.593 09:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5836bf90d6ff4ce19668ba0c655d4d70 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5836bf90d6ff4ce19668ba0c655d4d70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:24.593 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.594 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:24.594 [ 1]:0x2 00:13:24.594 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:24.594 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:24.853 
09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.853 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.111 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:25.112 [ 0]:0x2 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.112 09:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:25.112 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:25.370 [2024-11-20 09:51:58.761582] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:25.370 request: 00:13:25.370 { 00:13:25.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.370 "nsid": 2, 00:13:25.370 "host": "nqn.2016-06.io.spdk:host1", 00:13:25.370 "method": "nvmf_ns_remove_host", 00:13:25.370 "req_id": 1 00:13:25.370 } 00:13:25.370 Got JSON-RPC error response 00:13:25.370 response: 00:13:25.370 { 00:13:25.370 "code": -32602, 00:13:25.370 "message": "Invalid parameters" 00:13:25.370 } 00:13:25.370 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:25.370 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.370 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.370 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.370 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:25.371 09:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:25.371 [ 0]:0x2 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:25.371 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.629 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=581e92f3c4ea412c89392e71d7701d74 00:13:25.629 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 581e92f3c4ea412c89392e71d7701d74 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.629 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:25.629 09:51:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2610842 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2610842 /var/tmp/host.sock 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2610842 ']' 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:25.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.630 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.630 [2024-11-20 09:51:59.147312] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:13:25.630 [2024-11-20 09:51:59.147354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610842 ] 00:13:25.889 [2024-11-20 09:51:59.219365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.889 [2024-11-20 09:51:59.260565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.147 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.147 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:26.147 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.147 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:26.406 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 49ab1fe5-0b6e-4b30-b845-8e4a3481ec43 00:13:26.406 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:26.406 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 49AB1FE50B6E4B30B8458E4A3481EC43 -i 00:13:26.665 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 59f80e07-26cb-47e1-b108-ed97cc397b93 00:13:26.665 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:26.665 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 59F80E0726CB47E1B108ED97CC397B93 -i 00:13:26.924 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:26.924 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:27.183 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:27.183 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:27.441 nvme0n1 00:13:27.441 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:27.441 09:52:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:27.699 nvme1n2 00:13:27.699 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:27.699 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:27.699 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:27.699 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:27.956 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:27.956 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:27.956 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:27.956 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:27.956 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:28.213 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 49ab1fe5-0b6e-4b30-b845-8e4a3481ec43 == \4\9\a\b\1\f\e\5\-\0\b\6\e\-\4\b\3\0\-\b\8\4\5\-\8\e\4\a\3\4\8\1\e\c\4\3 ]] 00:13:28.213 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:28.213 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:28.213 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:28.470 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 59f80e07-26cb-47e1-b108-ed97cc397b93 == \5\9\f\8\0\e\0\7\-\2\6\c\b\-\4\7\e\1\-\b\1\0\8\-\e\d\9\7\c\c\3\9\7\b\9\3 ]] 00:13:28.471 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 49ab1fe5-0b6e-4b30-b845-8e4a3481ec43 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49AB1FE50B6E4B30B8458E4A3481EC43 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49AB1FE50B6E4B30B8458E4A3481EC43 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:28.729 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49AB1FE50B6E4B30B8458E4A3481EC43 00:13:28.988 [2024-11-20 09:52:02.455674] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:28.988 [2024-11-20 09:52:02.455705] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:28.988 [2024-11-20 09:52:02.455713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.988 request: 00:13:28.988 { 00:13:28.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.988 "namespace": { 00:13:28.988 "bdev_name": "invalid", 00:13:28.988 "nsid": 1, 00:13:28.988 "nguid": "49AB1FE50B6E4B30B8458E4A3481EC43", 00:13:28.988 "no_auto_visible": false 00:13:28.988 }, 00:13:28.988 "method": "nvmf_subsystem_add_ns", 00:13:28.988 "req_id": 1 00:13:28.988 } 00:13:28.988 Got JSON-RPC error response 00:13:28.988 response: 00:13:28.988 { 00:13:28.988 "code": -32602, 00:13:28.988 "message": "Invalid parameters" 00:13:28.988 } 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 49ab1fe5-0b6e-4b30-b845-8e4a3481ec43 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:28.988 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 49AB1FE50B6E4B30B8458E4A3481EC43 -i 00:13:29.247 09:52:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:31.176 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:31.176 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:31.176 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:31.434 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2610842 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2610842 ']' 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2610842 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610842 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610842' 00:13:31.435 killing process with pid 2610842 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2610842 00:13:31.435 09:52:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2610842 00:13:31.693 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.951 rmmod nvme_tcp 00:13:31.951 rmmod 
nvme_fabrics 00:13:31.951 rmmod nvme_keyring 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2608842 ']' 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2608842 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2608842 ']' 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2608842 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.951 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2608842 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2608842' 00:13:32.210 killing process with pid 2608842 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2608842 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2608842 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.210 
09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.210 09:52:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.836 00:13:34.836 real 0m26.226s 00:13:34.836 user 0m31.102s 00:13:34.836 sys 0m7.162s 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.836 ************************************ 00:13:34.836 END TEST nvmf_ns_masking 00:13:34.836 ************************************ 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.836 ************************************ 00:13:34.836 START TEST nvmf_nvme_cli 00:13:34.836 ************************************ 00:13:34.836 09:52:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:34.836 * Looking for test storage... 00:13:34.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.836 09:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.836 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.837 --rc genhtml_branch_coverage=1 00:13:34.837 --rc genhtml_function_coverage=1 00:13:34.837 --rc genhtml_legend=1 00:13:34.837 --rc geninfo_all_blocks=1 00:13:34.837 --rc geninfo_unexecuted_blocks=1 00:13:34.837 
00:13:34.837 ' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.837 --rc genhtml_branch_coverage=1 00:13:34.837 --rc genhtml_function_coverage=1 00:13:34.837 --rc genhtml_legend=1 00:13:34.837 --rc geninfo_all_blocks=1 00:13:34.837 --rc geninfo_unexecuted_blocks=1 00:13:34.837 00:13:34.837 ' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.837 --rc genhtml_branch_coverage=1 00:13:34.837 --rc genhtml_function_coverage=1 00:13:34.837 --rc genhtml_legend=1 00:13:34.837 --rc geninfo_all_blocks=1 00:13:34.837 --rc geninfo_unexecuted_blocks=1 00:13:34.837 00:13:34.837 ' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.837 --rc genhtml_branch_coverage=1 00:13:34.837 --rc genhtml_function_coverage=1 00:13:34.837 --rc genhtml_legend=1 00:13:34.837 --rc geninfo_all_blocks=1 00:13:34.837 --rc geninfo_unexecuted_blocks=1 00:13:34.837 00:13:34.837 ' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.837 09:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:34.837 09:52:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.406 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:41.407 09:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:41.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:41.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.407 09:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:41.407 Found net devices under 0000:86:00.0: cvl_0_0 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:41.407 Found net devices under 0000:86:00.1: cvl_0_1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.407 09:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.407 09:52:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:13:41.407 00:13:41.407 --- 10.0.0.2 ping statistics --- 00:13:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.407 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:13:41.407 00:13:41.407 --- 10.0.0.1 ping statistics --- 00:13:41.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.407 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.407 09:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.407 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2615458 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2615458 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2615458 ']' 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 [2024-11-20 09:52:14.133698] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:13:41.408 [2024-11-20 09:52:14.133750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.408 [2024-11-20 09:52:14.216517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.408 [2024-11-20 09:52:14.261515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.408 [2024-11-20 09:52:14.261552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.408 [2024-11-20 09:52:14.261559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.408 [2024-11-20 09:52:14.261564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.408 [2024-11-20 09:52:14.261569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.408 [2024-11-20 09:52:14.263020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.408 [2024-11-20 09:52:14.263148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.408 [2024-11-20 09:52:14.263278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.408 [2024-11-20 09:52:14.263279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 [2024-11-20 09:52:14.407902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 Malloc0 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 Malloc1 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 [2024-11-20 09:52:14.495816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:41.408 00:13:41.408 Discovery Log Number of Records 2, Generation counter 2 00:13:41.408 =====Discovery Log Entry 0====== 00:13:41.408 trtype: tcp 00:13:41.408 adrfam: ipv4 00:13:41.408 subtype: current discovery subsystem 00:13:41.408 treq: not required 00:13:41.408 portid: 0 00:13:41.408 trsvcid: 4420 
00:13:41.408 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:41.408 traddr: 10.0.0.2 00:13:41.408 eflags: explicit discovery connections, duplicate discovery information 00:13:41.408 sectype: none 00:13:41.408 =====Discovery Log Entry 1====== 00:13:41.408 trtype: tcp 00:13:41.408 adrfam: ipv4 00:13:41.408 subtype: nvme subsystem 00:13:41.408 treq: not required 00:13:41.408 portid: 0 00:13:41.408 trsvcid: 4420 00:13:41.408 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:41.408 traddr: 10.0.0.2 00:13:41.408 eflags: none 00:13:41.408 sectype: none 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:41.408 09:52:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.344 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:42.344 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:42.344 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.344 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:42.344 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:42.344 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:44.246 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:44.246 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:44.246 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:44.504 
09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:44.504 /dev/nvme0n2 ]] 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.504 09:52:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:44.763 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.022 rmmod nvme_tcp 00:13:45.022 rmmod nvme_fabrics 00:13:45.022 rmmod nvme_keyring 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2615458 ']' 
00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2615458 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2615458 ']' 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2615458 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2615458 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.022 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.023 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2615458' 00:13:45.023 killing process with pid 2615458 00:13:45.023 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2615458 00:13:45.023 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2615458 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.282 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.186 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.445 00:13:47.445 real 0m12.853s 00:13:47.445 user 0m19.420s 00:13:47.445 sys 0m5.064s 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:47.445 ************************************ 00:13:47.445 END TEST nvmf_nvme_cli 00:13:47.445 ************************************ 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:47.445 ************************************ 00:13:47.445 
START TEST nvmf_vfio_user 00:13:47.445 ************************************ 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:47.445 * Looking for test storage... 00:13:47.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.445 09:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.445 09:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:47.445 09:52:21 
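The long run of scripts/common.sh calls above is `lt 1.15 2` checking whether the installed lcov predates 2.0: `cmp_versions` splits both strings on `.`, `-` or `:` and compares component-wise as integers, padding the shorter list with zeros. A condensed sketch of that comparison (`version_lt` is an illustrative name, not the script's own):

```shell
# Condensed sketch of the cmp_versions/lt logic traced above.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}      # missing components count as 0
        if (( a != b )); then
            (( a < b ))                        # status 0 iff strictly less
            return
        fi
    done
    return 1                                   # equal is not "less than"
}
```

So `version_lt 1.15 2` succeeds, matching the `# return 0` in the trace.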
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:47.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.445 --rc genhtml_branch_coverage=1 00:13:47.445 --rc genhtml_function_coverage=1 00:13:47.445 --rc genhtml_legend=1 00:13:47.445 --rc geninfo_all_blocks=1 00:13:47.445 --rc geninfo_unexecuted_blocks=1 00:13:47.445 00:13:47.445 ' 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:47.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.445 --rc genhtml_branch_coverage=1 00:13:47.445 --rc genhtml_function_coverage=1 00:13:47.445 --rc genhtml_legend=1 00:13:47.445 --rc geninfo_all_blocks=1 00:13:47.445 --rc geninfo_unexecuted_blocks=1 00:13:47.445 00:13:47.445 ' 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:47.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.445 --rc genhtml_branch_coverage=1 00:13:47.445 --rc genhtml_function_coverage=1 00:13:47.445 --rc genhtml_legend=1 00:13:47.445 --rc geninfo_all_blocks=1 00:13:47.445 --rc geninfo_unexecuted_blocks=1 00:13:47.445 00:13:47.445 ' 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:47.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.445 --rc genhtml_branch_coverage=1 00:13:47.445 --rc genhtml_function_coverage=1 00:13:47.445 --rc genhtml_legend=1 00:13:47.445 --rc geninfo_all_blocks=1 00:13:47.445 --rc geninfo_unexecuted_blocks=1 00:13:47.445 00:13:47.445 ' 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.445 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.704 
09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.704 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:47.705 09:52:21 
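Note the captured failure `common.sh: line 33: [: : integer expression expected`: the trace's `'[' '' -eq 1 ']'` is comparing an empty expansion numerically. A hedged sketch of the usual guard, defaulting the variable before the test (`SPDK_FLAG` is an illustrative name, not the variable common.sh actually tests):

```shell
# '[' '' -eq 1 ']' fails as in the log above; defaulting with ${var:-0}
# keeps the numeric test well-formed. SPDK_FLAG is illustrative.
SPDK_FLAG=""
if [ "${SPDK_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or empty"      # this branch is taken here
fi
```

The `:-` form substitutes the default for both unset and empty values, which is exactly the case the log trips over.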
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2616698 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2616698' 00:13:47.705 Process pid: 2616698 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2616698 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2616698 ']' 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.705 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 [2024-11-20 09:52:21.101532] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:13:47.705 [2024-11-20 09:52:21.101578] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.705 [2024-11-20 09:52:21.175384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.705 [2024-11-20 09:52:21.217146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.705 [2024-11-20 09:52:21.217182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.705 [2024-11-20 09:52:21.217189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.705 [2024-11-20 09:52:21.217196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.705 [2024-11-20 09:52:21.217205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:47.705 [2024-11-20 09:52:21.218678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.705 [2024-11-20 09:52:21.218786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.705 [2024-11-20 09:52:21.218883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.705 [2024-11-20 09:52:21.218885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.964 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.964 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:47.964 09:52:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:48.901 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:49.160 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:49.160 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:49.160 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.160 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:49.160 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:49.418 Malloc1 00:13:49.418 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:49.418 09:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:49.677 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:49.936 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.936 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:49.936 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:49.936 Malloc2 00:13:50.195 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:50.195 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:50.453 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
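The RPC sequence above (transport, then per-device malloc bdev, subsystem, namespace, listener) condenses to the sketch below. This is a reconstruction of setup_nvmf_vfio_user from the trace; it assumes a running nvmf_tgt and an SPDK checkout at ./spdk, so it is not runnable stand-alone.

```shell
# Reconstructed from the trace; requires a live nvmf_tgt (assumed SPDK
# checkout at ./spdk; NUM_DEVICES=2 as in the log).
rpc=./spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```

The 64 and 512 arguments are MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from nvmf_vfio_user.sh, and `-s 0` matches the listener service id in the trace.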
$(seq 1 $NUM_DEVICES) 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:50.713 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:50.713 [2024-11-20 09:52:24.154929] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:13:50.713 [2024-11-20 09:52:24.154974] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617342 ] 00:13:50.713 [2024-11-20 09:52:24.193655] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:50.713 [2024-11-20 09:52:24.202475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.713 [2024-11-20 09:52:24.202497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f23ffa74000 00:13:50.713 [2024-11-20 09:52:24.203482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.204480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.205488] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.206496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.207500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.208508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.209512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.210513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.713 [2024-11-20 09:52:24.211527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.713 [2024-11-20 09:52:24.211536] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f23ffa69000 00:13:50.713 [2024-11-20 09:52:24.212452] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:50.713 [2024-11-20 09:52:24.221888] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:50.713 [2024-11-20 09:52:24.221909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:50.713 [2024-11-20 09:52:24.226629] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:50.713 [2024-11-20 09:52:24.226664] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:50.713 [2024-11-20 09:52:24.226731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:50.713 [2024-11-20 09:52:24.226745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:50.713 [2024-11-20 09:52:24.226750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:50.713 [2024-11-20 09:52:24.227631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:50.713 [2024-11-20 09:52:24.227639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:50.713 [2024-11-20 09:52:24.227645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:50.713 [2024-11-20 09:52:24.228632] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:50.713 [2024-11-20 09:52:24.228640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:50.713 [2024-11-20 09:52:24.228646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.229635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:50.713 [2024-11-20 09:52:24.229642] 
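The `offset 0x8, value 0x10300` read above is the NVMe Version (VS) register: bits 31:16 are the major version, 15:8 the minor, 7:0 the tertiary, so this vfio-user controller reports NVMe 1.3.0. A one-function sketch of the decode:

```shell
# Decode the NVMe VS register (offset 0x8) as read in the trace above.
decode_nvme_vs() {
    local vs=$1
    printf '%d.%d.%d\n' $(( vs >> 16 & 0xFFFF )) $(( vs >> 8 & 0xFF )) $(( vs & 0xFF ))
}
decode_nvme_vs 0x10300    # prints 1.3.0
```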
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.230659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:50.713 [2024-11-20 09:52:24.230668] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:50.713 [2024-11-20 09:52:24.230672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.230679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.230785] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:50.713 [2024-11-20 09:52:24.230790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.230797] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:50.713 [2024-11-20 09:52:24.231653] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:50.713 [2024-11-20 09:52:24.232655] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:50.713 [2024-11-20 09:52:24.233659] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:50.713 [2024-11-20 09:52:24.234661] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.713 [2024-11-20 09:52:24.234765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:50.713 [2024-11-20 09:52:24.235675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:50.713 [2024-11-20 09:52:24.235683] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:50.713 [2024-11-20 09:52:24.235687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:50.713 [2024-11-20 09:52:24.235704] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:50.713 [2024-11-20 09:52:24.235710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:50.713 [2024-11-20 09:52:24.235724] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.713 [2024-11-20 09:52:24.235729] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.713 [2024-11-20 09:52:24.235732] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.713 [2024-11-20 09:52:24.235745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.713 [2024-11-20 09:52:24.235792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
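The register writes above are the standard NVMe enable handshake: confirm CC.EN=0 and CSTS.RDY=0 ("controller is disabled"), write CC with EN set (the 0x460001 write to offset 0x14), then poll CSTS until RDY=1 ("controller is ready"). A toy model of that handshake, illustrative only; a real controller raises RDY asynchronously, modeled here as instantaneous:

```shell
# Toy model of the CC.EN / CSTS.RDY handshake traced above (illustrative;
# a real device flips CSTS.RDY asynchronously after CC.EN changes).
cc=0
csts=0
write_cc() {
    cc=$1
    if (( cc & 1 )); then csts=1; else csts=0; fi   # bit 0 is EN / RDY
}
enable_ctrlr() {
    write_cc $(( cc & ~1 ))      # clear CC.EN: disable
    (( csts == 0 )) || return 1  # wait for CSTS.RDY = 0
    write_cc $(( cc | 1 ))       # enable controller by writing CC.EN = 1
    (( csts == 1 )) || return 1  # wait for CSTS.RDY = 1: controller ready
}
```

Only after this handshake does the host move on to identify, as the trace does with the IDENTIFY (06) admin command.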
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:50.713 [2024-11-20 09:52:24.235801] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:50.713 [2024-11-20 09:52:24.235805] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:50.713 [2024-11-20 09:52:24.235808] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:50.713 [2024-11-20 09:52:24.235812] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:50.713 [2024-11-20 09:52:24.235818] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:50.713 [2024-11-20 09:52:24.235822] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:50.713 [2024-11-20 09:52:24.235826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:50.713 [2024-11-20 09:52:24.235834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.235856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.235866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.714 [2024-11-20 
09:52:24.235874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.714 [2024-11-20 09:52:24.235881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.714 [2024-11-20 09:52:24.235888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.714 [2024-11-20 09:52:24.235892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.235915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.235921] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:50.714 [2024-11-20 09:52:24.235926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.235944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.235955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236017] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:50.714 [2024-11-20 09:52:24.236020] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:50.714 [2024-11-20 09:52:24.236023] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236051] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:50.714 [2024-11-20 09:52:24.236058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236072] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.714 [2024-11-20 09:52:24.236075] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.714 [2024-11-20 09:52:24.236078] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236128] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.714 [2024-11-20 09:52:24.236132] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.714 [2024-11-20 09:52:24.236135] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236191] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:50.714 [2024-11-20 09:52:24.236194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:50.714 [2024-11-20 09:52:24.236199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:50.714 [2024-11-20 09:52:24.236218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:50.714 [2024-11-20 09:52:24.236298] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:50.714 [2024-11-20 09:52:24.236301] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:50.714 [2024-11-20 09:52:24.236304] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:50.714 [2024-11-20 09:52:24.236307] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:50.714 [2024-11-20 09:52:24.236312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:50.714 [2024-11-20 09:52:24.236319] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:50.714 [2024-11-20 09:52:24.236322] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:50.714 [2024-11-20 09:52:24.236325] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:50.714 [2024-11-20 09:52:24.236340] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.714 [2024-11-20 09:52:24.236343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236354] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:50.714 [2024-11-20 09:52:24.236358] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:50.714 [2024-11-20 09:52:24.236361] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.714 [2024-11-20 09:52:24.236366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:50.714 [2024-11-20 09:52:24.236372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:50.714 [2024-11-20 09:52:24.236398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:50.714 ===================================================== 00:13:50.714 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.714 ===================================================== 00:13:50.714 Controller Capabilities/Features 00:13:50.714 ================================ 00:13:50.714 Vendor ID: 4e58 00:13:50.714 Subsystem Vendor ID: 4e58 00:13:50.714 Serial Number: SPDK1 00:13:50.714 Model Number: SPDK bdev Controller 00:13:50.714 Firmware Version: 25.01 00:13:50.714 Recommended Arb Burst: 6 00:13:50.714 IEEE OUI Identifier: 8d 6b 50 00:13:50.714 Multi-path I/O 00:13:50.714 May have multiple subsystem ports: Yes 00:13:50.714 May have multiple controllers: Yes 00:13:50.714 Associated with SR-IOV VF: No 00:13:50.714 Max Data Transfer Size: 131072 00:13:50.714 Max Number of Namespaces: 32 00:13:50.714 Max Number of I/O Queues: 127 00:13:50.714 NVMe Specification Version (VS): 1.3 00:13:50.714 NVMe Specification Version (Identify): 1.3 00:13:50.714 Maximum Queue Entries: 256 00:13:50.714 Contiguous Queues Required: Yes 00:13:50.714 Arbitration Mechanisms Supported 00:13:50.714 Weighted Round Robin: Not Supported 00:13:50.714 Vendor Specific: Not Supported 00:13:50.714 Reset Timeout: 15000 ms 00:13:50.714 Doorbell Stride: 4 bytes 00:13:50.714 NVM Subsystem Reset: Not Supported 00:13:50.714 Command Sets Supported 00:13:50.714 NVM Command Set: Supported 00:13:50.714 Boot Partition: Not Supported 00:13:50.714 Memory 
Page Size Minimum: 4096 bytes 00:13:50.714 Memory Page Size Maximum: 4096 bytes 00:13:50.714 Persistent Memory Region: Not Supported 00:13:50.714 Optional Asynchronous Events Supported 00:13:50.714 Namespace Attribute Notices: Supported 00:13:50.714 Firmware Activation Notices: Not Supported 00:13:50.714 ANA Change Notices: Not Supported 00:13:50.714 PLE Aggregate Log Change Notices: Not Supported 00:13:50.714 LBA Status Info Alert Notices: Not Supported 00:13:50.714 EGE Aggregate Log Change Notices: Not Supported 00:13:50.714 Normal NVM Subsystem Shutdown event: Not Supported 00:13:50.714 Zone Descriptor Change Notices: Not Supported 00:13:50.714 Discovery Log Change Notices: Not Supported 00:13:50.714 Controller Attributes 00:13:50.714 128-bit Host Identifier: Supported 00:13:50.714 Non-Operational Permissive Mode: Not Supported 00:13:50.714 NVM Sets: Not Supported 00:13:50.714 Read Recovery Levels: Not Supported 00:13:50.714 Endurance Groups: Not Supported 00:13:50.714 Predictable Latency Mode: Not Supported 00:13:50.714 Traffic Based Keep ALive: Not Supported 00:13:50.714 Namespace Granularity: Not Supported 00:13:50.714 SQ Associations: Not Supported 00:13:50.714 UUID List: Not Supported 00:13:50.714 Multi-Domain Subsystem: Not Supported 00:13:50.714 Fixed Capacity Management: Not Supported 00:13:50.714 Variable Capacity Management: Not Supported 00:13:50.714 Delete Endurance Group: Not Supported 00:13:50.714 Delete NVM Set: Not Supported 00:13:50.714 Extended LBA Formats Supported: Not Supported 00:13:50.714 Flexible Data Placement Supported: Not Supported 00:13:50.714 00:13:50.714 Controller Memory Buffer Support 00:13:50.714 ================================ 00:13:50.714 Supported: No 00:13:50.714 00:13:50.714 Persistent Memory Region Support 00:13:50.714 ================================ 00:13:50.714 Supported: No 00:13:50.714 00:13:50.714 Admin Command Set Attributes 00:13:50.714 ============================ 00:13:50.715 Security Send/Receive: Not Supported 
00:13:50.715 Format NVM: Not Supported 00:13:50.715 Firmware Activate/Download: Not Supported 00:13:50.715 Namespace Management: Not Supported 00:13:50.715 Device Self-Test: Not Supported 00:13:50.715 Directives: Not Supported 00:13:50.715 NVMe-MI: Not Supported 00:13:50.715 Virtualization Management: Not Supported 00:13:50.715 Doorbell Buffer Config: Not Supported 00:13:50.715 Get LBA Status Capability: Not Supported 00:13:50.715 Command & Feature Lockdown Capability: Not Supported 00:13:50.715 Abort Command Limit: 4 00:13:50.715 Async Event Request Limit: 4 00:13:50.715 Number of Firmware Slots: N/A 00:13:50.715 Firmware Slot 1 Read-Only: N/A 00:13:50.715 Firmware Activation Without Reset: N/A 00:13:50.715 Multiple Update Detection Support: N/A 00:13:50.715 Firmware Update Granularity: No Information Provided 00:13:50.715 Per-Namespace SMART Log: No 00:13:50.715 Asymmetric Namespace Access Log Page: Not Supported 00:13:50.715 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:50.715 Command Effects Log Page: Supported 00:13:50.715 Get Log Page Extended Data: Supported 00:13:50.715 Telemetry Log Pages: Not Supported 00:13:50.715 Persistent Event Log Pages: Not Supported 00:13:50.715 Supported Log Pages Log Page: May Support 00:13:50.715 Commands Supported & Effects Log Page: Not Supported 00:13:50.715 Feature Identifiers & Effects Log Page:May Support 00:13:50.715 NVMe-MI Commands & Effects Log Page: May Support 00:13:50.715 Data Area 4 for Telemetry Log: Not Supported 00:13:50.715 Error Log Page Entries Supported: 128 00:13:50.715 Keep Alive: Supported 00:13:50.715 Keep Alive Granularity: 10000 ms 00:13:50.715 00:13:50.715 NVM Command Set Attributes 00:13:50.715 ========================== 00:13:50.715 Submission Queue Entry Size 00:13:50.715 Max: 64 00:13:50.715 Min: 64 00:13:50.715 Completion Queue Entry Size 00:13:50.715 Max: 16 00:13:50.715 Min: 16 00:13:50.715 Number of Namespaces: 32 00:13:50.715 Compare Command: Supported 00:13:50.715 Write Uncorrectable 
Command: Not Supported 00:13:50.715 Dataset Management Command: Supported 00:13:50.715 Write Zeroes Command: Supported 00:13:50.715 Set Features Save Field: Not Supported 00:13:50.715 Reservations: Not Supported 00:13:50.715 Timestamp: Not Supported 00:13:50.715 Copy: Supported 00:13:50.715 Volatile Write Cache: Present 00:13:50.715 Atomic Write Unit (Normal): 1 00:13:50.715 Atomic Write Unit (PFail): 1 00:13:50.715 Atomic Compare & Write Unit: 1 00:13:50.715 Fused Compare & Write: Supported 00:13:50.715 Scatter-Gather List 00:13:50.715 SGL Command Set: Supported (Dword aligned) 00:13:50.715 SGL Keyed: Not Supported 00:13:50.715 SGL Bit Bucket Descriptor: Not Supported 00:13:50.715 SGL Metadata Pointer: Not Supported 00:13:50.715 Oversized SGL: Not Supported 00:13:50.715 SGL Metadata Address: Not Supported 00:13:50.715 SGL Offset: Not Supported 00:13:50.715 Transport SGL Data Block: Not Supported 00:13:50.715 Replay Protected Memory Block: Not Supported 00:13:50.715 00:13:50.715 Firmware Slot Information 00:13:50.715 ========================= 00:13:50.715 Active slot: 1 00:13:50.715 Slot 1 Firmware Revision: 25.01 00:13:50.715 00:13:50.715 00:13:50.715 Commands Supported and Effects 00:13:50.715 ============================== 00:13:50.715 Admin Commands 00:13:50.715 -------------- 00:13:50.715 Get Log Page (02h): Supported 00:13:50.715 Identify (06h): Supported 00:13:50.715 Abort (08h): Supported 00:13:50.715 Set Features (09h): Supported 00:13:50.715 Get Features (0Ah): Supported 00:13:50.715 Asynchronous Event Request (0Ch): Supported 00:13:50.715 Keep Alive (18h): Supported 00:13:50.715 I/O Commands 00:13:50.715 ------------ 00:13:50.715 Flush (00h): Supported LBA-Change 00:13:50.715 Write (01h): Supported LBA-Change 00:13:50.715 Read (02h): Supported 00:13:50.715 Compare (05h): Supported 00:13:50.715 Write Zeroes (08h): Supported LBA-Change 00:13:50.715 Dataset Management (09h): Supported LBA-Change 00:13:50.715 Copy (19h): Supported LBA-Change 00:13:50.715 
00:13:50.715 Error Log 00:13:50.715 ========= 00:13:50.715 00:13:50.715 Arbitration 00:13:50.715 =========== 00:13:50.715 Arbitration Burst: 1 00:13:50.715 00:13:50.715 Power Management 00:13:50.715 ================ 00:13:50.715 Number of Power States: 1 00:13:50.715 Current Power State: Power State #0 00:13:50.715 Power State #0: 00:13:50.715 Max Power: 0.00 W 00:13:50.715 Non-Operational State: Operational 00:13:50.715 Entry Latency: Not Reported 00:13:50.715 Exit Latency: Not Reported 00:13:50.715 Relative Read Throughput: 0 00:13:50.715 Relative Read Latency: 0 00:13:50.715 Relative Write Throughput: 0 00:13:50.715 Relative Write Latency: 0 00:13:50.715 Idle Power: Not Reported 00:13:50.715 Active Power: Not Reported 00:13:50.715 Non-Operational Permissive Mode: Not Supported 00:13:50.715 00:13:50.715 Health Information 00:13:50.715 ================== 00:13:50.715 Critical Warnings: 00:13:50.715 Available Spare Space: OK 00:13:50.715 Temperature: OK 00:13:50.715 Device Reliability: OK 00:13:50.715 Read Only: No 00:13:50.715 Volatile Memory Backup: OK 00:13:50.715 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:50.715 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:50.715 Available Spare: 0% 00:13:50.715 Available Spare Threshold: 0% 00:13:50.715 Life Percentage Used: 0%
[2024-11-20 09:52:24.236481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:50.715 [2024-11-20 09:52:24.236490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:50.715 [2024-11-20 09:52:24.236513] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:50.715 [2024-11-20 09:52:24.236522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.715 [2024-11-20 09:52:24.236528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.715 [2024-11-20 09:52:24.236533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.715 [2024-11-20 09:52:24.236538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.715 [2024-11-20 09:52:24.240208] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:50.715 [2024-11-20 09:52:24.240219] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:50.715 [2024-11-20 09:52:24.240714] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.715 [2024-11-20 09:52:24.240764] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:50.715 [2024-11-20 09:52:24.240769] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:50.715 [2024-11-20 09:52:24.241711] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:50.715 [2024-11-20 09:52:24.241721] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:50.715 [2024-11-20 09:52:24.241768] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:50.715 [2024-11-20 09:52:24.242745] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:50.715 Data Units Read: 0 00:13:50.715 Data Units Written: 0 00:13:50.715 Host Read Commands: 0 00:13:50.715 Host Write Commands: 0 00:13:50.715 Controller Busy Time: 0 minutes 00:13:50.715 Power Cycles: 0 00:13:50.715 Power On Hours: 0 hours 00:13:50.715 Unsafe Shutdowns: 0 00:13:50.715 Unrecoverable Media Errors: 0 00:13:50.715 Lifetime Error Log Entries: 0 00:13:50.715 Warning Temperature Time: 0 minutes 00:13:50.715 Critical Temperature Time: 0 minutes 00:13:50.715 00:13:50.715 Number of Queues 00:13:50.715 ================ 00:13:50.715 Number of I/O Submission Queues: 127 00:13:50.715 Number of I/O Completion Queues: 127 00:13:50.715 00:13:50.715 Active Namespaces 00:13:50.715 ================= 00:13:50.715 Namespace ID:1 00:13:50.715 Error Recovery Timeout: Unlimited 00:13:50.715 Command Set Identifier: NVM (00h) 00:13:50.715 Deallocate: Supported 00:13:50.715 Deallocated/Unwritten Error: Not Supported 00:13:50.715 Deallocated Read Value: Unknown 00:13:50.715 Deallocate in Write Zeroes: Not Supported 00:13:50.715 Deallocated Guard Field: 0xFFFF 00:13:50.715 Flush: Supported 00:13:50.715 Reservation: Supported 00:13:50.715 Namespace Sharing Capabilities: Multiple Controllers 00:13:50.715 Size (in LBAs): 131072 (0GiB) 00:13:50.715 Capacity (in LBAs): 131072 (0GiB) 00:13:50.715 Utilization (in LBAs): 131072 (0GiB) 00:13:50.715 NGUID: 5354282CAD28429B8A144A33A7027367 00:13:50.715 UUID: 5354282c-ad28-429b-8a14-4a33a7027367 00:13:50.715 Thin Provisioning: Not Supported 00:13:50.715 Per-NS Atomic Units: Yes 00:13:50.715 Atomic Boundary Size (Normal): 0 00:13:50.715 Atomic Boundary Size (PFail): 0 00:13:50.715 Atomic Boundary Offset: 0 00:13:50.715 Maximum Single Source Range Length: 65535 00:13:50.715 Maximum Copy Length: 65535 00:13:50.715 Maximum Source Range Count: 1 00:13:50.715 NGUID/EUI64 Never Reused: No 00:13:50.715 Namespace Write Protected: No 00:13:50.715 Number of LBA Formats: 1 00:13:50.715 Current LBA Format: LBA Format #00 00:13:50.715 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:50.715 00:13:50.715 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:50.974 [2024-11-20 09:52:24.456995] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:56.246 Initializing NVMe Controllers 00:13:56.246 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:56.246 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:56.246 Initialization complete. Launching workers. 00:13:56.246 ======================================================== 00:13:56.246 Latency(us) 00:13:56.246 Device Information : IOPS MiB/s Average min max 00:13:56.246 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39947.86 156.05 3203.99 940.48 6661.67 00:13:56.246 ======================================================== 00:13:56.246 Total : 39947.86 156.05 3203.99 940.48 6661.67 00:13:56.246 00:13:56.246 [2024-11-20 09:52:29.477606] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.246 09:52:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:56.246 [2024-11-20 09:52:29.710674] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.518 Initializing NVMe Controllers 00:14:01.518 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:01.518 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:01.518 Initialization complete. Launching workers. 00:14:01.518 ======================================================== 00:14:01.518 Latency(us) 00:14:01.518 Device Information : IOPS MiB/s Average min max 00:14:01.518 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.33 62.71 7978.26 3049.14 11110.15 00:14:01.518 ======================================================== 00:14:01.518 Total : 16054.33 62.71 7978.26 3049.14 11110.15 00:14:01.518 00:14:01.518 [2024-11-20 09:52:34.757602] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.518 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:01.518 [2024-11-20 09:52:34.961553] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.793 [2024-11-20 09:52:40.080679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.793 Initializing NVMe Controllers 00:14:06.793 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:06.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:06.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:06.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:06.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:06.793 Initialization complete. 
Launching workers. 00:14:06.793 Starting thread on core 2 00:14:06.793 Starting thread on core 3 00:14:06.793 Starting thread on core 1 00:14:06.793 09:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:07.052 [2024-11-20 09:52:40.375575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.342 [2024-11-20 09:52:43.444634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.342 Initializing NVMe Controllers 00:14:10.342 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.342 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.342 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:10.342 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:10.342 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:10.342 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:10.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:10.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:10.343 Initialization complete. Launching workers. 
00:14:10.343 Starting thread on core 1 with urgent priority queue 00:14:10.343 Starting thread on core 2 with urgent priority queue 00:14:10.343 Starting thread on core 3 with urgent priority queue 00:14:10.343 Starting thread on core 0 with urgent priority queue 00:14:10.343 SPDK bdev Controller (SPDK1 ) core 0: 2791.00 IO/s 35.83 secs/100000 ios 00:14:10.343 SPDK bdev Controller (SPDK1 ) core 1: 2666.33 IO/s 37.50 secs/100000 ios 00:14:10.343 SPDK bdev Controller (SPDK1 ) core 2: 2758.00 IO/s 36.26 secs/100000 ios 00:14:10.343 SPDK bdev Controller (SPDK1 ) core 3: 3379.00 IO/s 29.59 secs/100000 ios 00:14:10.343 ======================================================== 00:14:10.343 00:14:10.343 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:10.343 [2024-11-20 09:52:43.736679] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:10.343 Initializing NVMe Controllers 00:14:10.343 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.343 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:10.343 Namespace ID: 1 size: 0GB 00:14:10.343 Initialization complete. 00:14:10.343 INFO: using host memory buffer for IO 00:14:10.343 Hello world! 
00:14:10.343 [2024-11-20 09:52:43.767876] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:10.343 09:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:10.602 [2024-11-20 09:52:44.048623] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:11.540 Initializing NVMe Controllers 00:14:11.540 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:11.540 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:11.540 Initialization complete. Launching workers. 00:14:11.540 submit (in ns) avg, min, max = 7700.7, 3125.7, 4000451.4 00:14:11.540 complete (in ns) avg, min, max = 16392.5, 1725.7, 3998121.0 00:14:11.540 00:14:11.540 Submit histogram 00:14:11.540 ================ 00:14:11.540 Range in us Cumulative Count 00:14:11.540 3.124 - 3.139: 0.0060% ( 1) 00:14:11.540 3.154 - 3.170: 0.0120% ( 1) 00:14:11.540 3.170 - 3.185: 0.0239% ( 2) 00:14:11.540 3.185 - 3.200: 0.0658% ( 7) 00:14:11.540 3.200 - 3.215: 0.4307% ( 61) 00:14:11.540 3.215 - 3.230: 1.5732% ( 191) 00:14:11.540 3.230 - 3.246: 3.3200% ( 292) 00:14:11.540 3.246 - 3.261: 5.8683% ( 426) 00:14:11.540 3.261 - 3.276: 10.9110% ( 843) 00:14:11.540 3.276 - 3.291: 16.7554% ( 977) 00:14:11.540 3.291 - 3.307: 23.1979% ( 1077) 00:14:11.540 3.307 - 3.322: 30.4122% ( 1206) 00:14:11.540 3.322 - 3.337: 36.8966% ( 1084) 00:14:11.540 3.337 - 3.352: 42.4179% ( 923) 00:14:11.540 3.352 - 3.368: 48.4357% ( 1006) 00:14:11.540 3.368 - 3.383: 54.6390% ( 1037) 00:14:11.540 3.383 - 3.398: 60.1603% ( 923) 00:14:11.540 3.398 - 3.413: 65.8910% ( 958) 00:14:11.540 3.413 - 3.429: 72.6925% ( 1137) 00:14:11.540 3.429 - 3.444: 77.0234% ( 724) 00:14:11.540 3.444 - 3.459: 81.4141% ( 734) 
00:14:11.540 3.459 - 3.474: 84.5307% ( 521) 00:14:11.540 3.474 - 3.490: 86.2416% ( 286) 00:14:11.540 3.490 - 3.505: 87.6294% ( 232) 00:14:11.540 3.505 - 3.520: 88.2455% ( 103) 00:14:11.540 3.520 - 3.535: 88.6583% ( 69) 00:14:11.540 3.535 - 3.550: 89.1488% ( 82) 00:14:11.540 3.550 - 3.566: 89.7290% ( 97) 00:14:11.540 3.566 - 3.581: 90.4947% ( 128) 00:14:11.540 3.581 - 3.596: 91.3382% ( 141) 00:14:11.540 3.596 - 3.611: 92.2773% ( 157) 00:14:11.540 3.611 - 3.627: 93.1507% ( 146) 00:14:11.540 3.627 - 3.642: 94.0061% ( 143) 00:14:11.540 3.642 - 3.657: 94.9692% ( 161) 00:14:11.540 3.657 - 3.672: 95.9921% ( 171) 00:14:11.540 3.672 - 3.688: 96.7937% ( 134) 00:14:11.540 3.688 - 3.703: 97.5474% ( 126) 00:14:11.540 3.703 - 3.718: 98.1157% ( 95) 00:14:11.540 3.718 - 3.733: 98.5404% ( 71) 00:14:11.540 3.733 - 3.749: 98.8634% ( 54) 00:14:11.540 3.749 - 3.764: 99.0907% ( 38) 00:14:11.540 3.764 - 3.779: 99.2523% ( 27) 00:14:11.540 3.779 - 3.794: 99.3839% ( 22) 00:14:11.540 3.794 - 3.810: 99.5095% ( 21) 00:14:11.540 3.810 - 3.825: 99.5932% ( 14) 00:14:11.540 3.825 - 3.840: 99.6291% ( 6) 00:14:11.540 3.840 - 3.855: 99.6351% ( 1) 00:14:11.540 4.907 - 4.937: 99.6411% ( 1) 00:14:11.540 4.998 - 5.029: 99.6471% ( 1) 00:14:11.540 5.029 - 5.059: 99.6590% ( 2) 00:14:11.540 5.059 - 5.090: 99.6650% ( 1) 00:14:11.540 5.090 - 5.120: 99.6710% ( 1) 00:14:11.540 5.242 - 5.272: 99.6830% ( 2) 00:14:11.540 5.333 - 5.364: 99.6949% ( 2) 00:14:11.540 5.364 - 5.394: 99.7009% ( 1) 00:14:11.540 5.425 - 5.455: 99.7069% ( 1) 00:14:11.540 5.455 - 5.486: 99.7129% ( 1) 00:14:11.540 5.486 - 5.516: 99.7188% ( 1) 00:14:11.540 5.547 - 5.577: 99.7248% ( 1) 00:14:11.540 5.730 - 5.760: 99.7308% ( 1) 00:14:11.540 5.912 - 5.943: 99.7368% ( 1) 00:14:11.540 6.156 - 6.187: 99.7428% ( 1) 00:14:11.540 6.187 - 6.217: 99.7488% ( 1) 00:14:11.540 6.278 - 6.309: 99.7547% ( 1) 00:14:11.540 6.370 - 6.400: 99.7667% ( 2) 00:14:11.540 6.400 - 6.430: 99.7727% ( 1) 00:14:11.540 6.461 - 6.491: 99.7787% ( 1) 00:14:11.540 6.491 - 6.522: 
99.7847% ( 1) 00:14:11.540 6.522 - 6.552: 99.7906% ( 1) 00:14:11.540 6.583 - 6.613: 99.7966% ( 1) 00:14:11.540 6.613 - 6.644: 99.8086% ( 2) 00:14:11.540 6.705 - 6.735: 99.8146% ( 1) 00:14:11.540 6.766 - 6.796: 99.8205% ( 1) 00:14:11.540 6.918 - 6.949: 99.8265% ( 1) 00:14:11.540 6.949 - 6.979: 99.8325% ( 1) 00:14:11.540 6.979 - 7.010: 99.8385% ( 1) 00:14:11.540 7.253 - 7.284: 99.8445% ( 1) 00:14:11.540 7.436 - 7.467: 99.8505% ( 1) 00:14:11.540 7.589 - 7.619: 99.8564% ( 1) 00:14:11.540 [2024-11-20 09:52:45.069559] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:11.540 7.863 - 7.924: 99.8624% ( 1) 00:14:11.540 7.985 - 8.046: 99.8684% ( 1) 00:14:11.540 11.520 - 11.581: 99.8744% ( 1) 00:14:11.540 13.592 - 13.653: 99.8804% ( 1) 00:14:11.540 15.543 - 15.604: 99.8863% ( 1) 00:14:11.540 19.139 - 19.261: 99.8923% ( 1) 00:14:11.540 3994.575 - 4025.783: 100.0000% ( 18) 00:14:11.540 00:14:11.540 Complete histogram 00:14:11.540 ================== 00:14:11.540 Range in us Cumulative Count 00:14:11.540 1.722 - 1.730: 0.0060% ( 1) 00:14:11.540 1.730 - 1.737: 0.0718% ( 11) 00:14:11.540 1.737 - 1.745: 0.1137% ( 7) 00:14:11.540 1.752 - 1.760: 0.1316% ( 3) 00:14:11.540 1.760 - 1.768: 0.1615% ( 5) 00:14:11.540 1.768 - 1.775: 1.1186% ( 160) 00:14:11.540 1.775 - 1.783: 10.6837% ( 1599) 00:14:11.540 1.783 - 1.790: 34.1389% ( 3921) 00:14:11.540 1.790 - 1.798: 50.2841% ( 2699) 00:14:11.540 1.798 - 1.806: 54.8005% ( 755) 00:14:11.540 1.806 - 1.813: 57.0318% ( 373) 00:14:11.540 1.813 - 1.821: 58.5751% ( 258) 00:14:11.540 1.821 - 1.829: 62.0925% ( 588) 00:14:11.540 1.829 - 1.836: 73.4103% ( 1892) 00:14:11.540 1.836 - 1.844: 85.4878% ( 2019) 00:14:11.540 1.844 - 1.851: 91.2544% ( 964) 00:14:11.540 1.851 - 1.859: 93.8266% ( 430) 00:14:11.540 1.859 - 1.867: 96.0400% ( 370) 00:14:11.540 1.867 - 1.874: 97.3201% ( 214) 00:14:11.540 1.874 - 1.882: 97.7927% ( 79) 00:14:11.540 1.882 - 1.890: 97.9602% ( 28) 00:14:11.540 1.890 - 1.897: 98.1037% 
( 24) 00:14:11.540 1.897 - 1.905: 98.4028% ( 50) 00:14:11.540 1.905 - 1.912: 98.8156% ( 69) 00:14:11.540 1.912 - 1.920: 99.1207% ( 51) 00:14:11.541 1.920 - 1.928: 99.2822% ( 27) 00:14:11.541 1.928 - 1.935: 99.3540% ( 12) 00:14:11.541 1.935 - 1.943: 99.3779% ( 4) 00:14:11.541 1.943 - 1.950: 99.3898% ( 2) 00:14:11.541 1.950 - 1.966: 99.4138% ( 4) 00:14:11.541 1.966 - 1.981: 99.4257% ( 2) 00:14:11.541 1.981 - 1.996: 99.4497% ( 4) 00:14:11.541 1.996 - 2.011: 99.4616% ( 2) 00:14:11.541 2.042 - 2.057: 99.4676% ( 1) 00:14:11.541 2.057 - 2.072: 99.4736% ( 1) 00:14:11.541 2.408 - 2.423: 99.4796% ( 1) 00:14:11.541 3.337 - 3.352: 99.4856% ( 1) 00:14:11.541 3.490 - 3.505: 99.4915% ( 1) 00:14:11.541 3.535 - 3.550: 99.4975% ( 1) 00:14:11.541 3.550 - 3.566: 99.5035% ( 1) 00:14:11.541 3.596 - 3.611: 99.5095% ( 1) 00:14:11.541 3.611 - 3.627: 99.5155% ( 1) 00:14:11.541 3.642 - 3.657: 99.5214% ( 1) 00:14:11.541 3.870 - 3.886: 99.5274% ( 1) 00:14:11.541 4.053 - 4.084: 99.5334% ( 1) 00:14:11.541 4.267 - 4.297: 99.5394% ( 1) 00:14:11.541 4.297 - 4.328: 99.5454% ( 1) 00:14:11.541 4.632 - 4.663: 99.5514% ( 1) 00:14:11.541 4.785 - 4.815: 99.5573% ( 1) 00:14:11.541 4.876 - 4.907: 99.5633% ( 1) 00:14:11.541 5.029 - 5.059: 99.5693% ( 1) 00:14:11.541 5.211 - 5.242: 99.5753% ( 1) 00:14:11.541 5.303 - 5.333: 99.5813% ( 1) 00:14:11.541 5.394 - 5.425: 99.5872% ( 1) 00:14:11.541 5.760 - 5.790: 99.5932% ( 1) 00:14:11.541 5.851 - 5.882: 99.5992% ( 1) 00:14:11.541 6.217 - 6.248: 99.6052% ( 1) 00:14:11.541 6.491 - 6.522: 99.6112% ( 1) 00:14:11.541 6.522 - 6.552: 99.6172% ( 1) 00:14:11.541 10.179 - 10.240: 99.6231% ( 1) 00:14:11.541 12.983 - 13.044: 99.6291% ( 1) 00:14:11.541 14.141 - 14.202: 99.6351% ( 1) 00:14:11.541 3978.971 - 3994.575: 99.6530% ( 3) 00:14:11.541 3994.575 - 4025.783: 100.0000% ( 58) 00:14:11.541 00:14:11.541 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 
00:14:11.541 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:11.541 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:11.541 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:11.541 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:11.800 [ 00:14:11.800 { 00:14:11.800 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:11.800 "subtype": "Discovery", 00:14:11.800 "listen_addresses": [], 00:14:11.800 "allow_any_host": true, 00:14:11.800 "hosts": [] 00:14:11.800 }, 00:14:11.800 { 00:14:11.800 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:11.800 "subtype": "NVMe", 00:14:11.800 "listen_addresses": [ 00:14:11.800 { 00:14:11.800 "trtype": "VFIOUSER", 00:14:11.800 "adrfam": "IPv4", 00:14:11.800 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:11.800 "trsvcid": "0" 00:14:11.800 } 00:14:11.800 ], 00:14:11.800 "allow_any_host": true, 00:14:11.800 "hosts": [], 00:14:11.800 "serial_number": "SPDK1", 00:14:11.800 "model_number": "SPDK bdev Controller", 00:14:11.800 "max_namespaces": 32, 00:14:11.800 "min_cntlid": 1, 00:14:11.800 "max_cntlid": 65519, 00:14:11.800 "namespaces": [ 00:14:11.800 { 00:14:11.800 "nsid": 1, 00:14:11.800 "bdev_name": "Malloc1", 00:14:11.800 "name": "Malloc1", 00:14:11.800 "nguid": "5354282CAD28429B8A144A33A7027367", 00:14:11.800 "uuid": "5354282c-ad28-429b-8a14-4a33a7027367" 00:14:11.800 } 00:14:11.800 ] 00:14:11.800 }, 00:14:11.800 { 00:14:11.800 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:11.800 "subtype": "NVMe", 00:14:11.800 "listen_addresses": [ 00:14:11.800 { 00:14:11.800 "trtype": "VFIOUSER", 00:14:11.800 "adrfam": "IPv4", 00:14:11.800 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:14:11.800 "trsvcid": "0" 00:14:11.800 } 00:14:11.800 ], 00:14:11.800 "allow_any_host": true, 00:14:11.800 "hosts": [], 00:14:11.800 "serial_number": "SPDK2", 00:14:11.800 "model_number": "SPDK bdev Controller", 00:14:11.800 "max_namespaces": 32, 00:14:11.800 "min_cntlid": 1, 00:14:11.800 "max_cntlid": 65519, 00:14:11.800 "namespaces": [ 00:14:11.800 { 00:14:11.800 "nsid": 1, 00:14:11.800 "bdev_name": "Malloc2", 00:14:11.800 "name": "Malloc2", 00:14:11.800 "nguid": "EFBB9595A82549CF82DC4D986E5956BD", 00:14:11.800 "uuid": "efbb9595-a825-49cf-82dc-4d986e5956bd" 00:14:11.800 } 00:14:11.800 ] 00:14:11.800 } 00:14:11.800 ] 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2620788 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:11.800 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:12.060 [2024-11-20 09:52:45.468585] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:12.060 Malloc3 00:14:12.060 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:12.318 [2024-11-20 09:52:45.707444] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:12.318 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:12.318 Asynchronous Event Request test 00:14:12.318 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:12.318 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:12.318 Registering asynchronous event callbacks... 00:14:12.318 Starting namespace attribute notice tests for all controllers... 00:14:12.318 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:12.318 aer_cb - Changed Namespace 00:14:12.318 Cleaning up... 
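The AER test above gates on a touch file: the `aer` tool is started with `-t /tmp/aer_touch_file` and creates that file once its event callbacks are registered, while the script's `waitforfile` helper polls until the file exists before proceeding. A minimal Python equivalent of that polling loop (the timeout and interval values are assumptions for illustration, not taken from the log):

```python
import os
import tempfile
import time

def waitforfile(path, timeout=5.0, interval=0.01):
    """Poll until `path` exists; return True on success, False on timeout.

    Mirrors the shape of the shell helper used by the test script: a
    bounded existence-polling loop rather than inotify.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Demo: simulate the aer tool creating the touch file, then wait on it.
touch_file = os.path.join(tempfile.gettempdir(), "aer_touch_file_demo")
open(touch_file, "w").close()
print(waitforfile(touch_file))  # True: file already exists
os.remove(touch_file)
```

The same pattern explains the `rm -f /tmp/aer_touch_file` in the log: the script removes the marker after the wait so a later test run cannot see a stale file.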
00:14:12.579 [ 00:14:12.579 { 00:14:12.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:12.579 "subtype": "Discovery", 00:14:12.579 "listen_addresses": [], 00:14:12.579 "allow_any_host": true, 00:14:12.579 "hosts": [] 00:14:12.579 }, 00:14:12.579 { 00:14:12.579 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:12.579 "subtype": "NVMe", 00:14:12.579 "listen_addresses": [ 00:14:12.579 { 00:14:12.579 "trtype": "VFIOUSER", 00:14:12.579 "adrfam": "IPv4", 00:14:12.579 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:12.579 "trsvcid": "0" 00:14:12.579 } 00:14:12.579 ], 00:14:12.579 "allow_any_host": true, 00:14:12.579 "hosts": [], 00:14:12.579 "serial_number": "SPDK1", 00:14:12.579 "model_number": "SPDK bdev Controller", 00:14:12.579 "max_namespaces": 32, 00:14:12.579 "min_cntlid": 1, 00:14:12.579 "max_cntlid": 65519, 00:14:12.579 "namespaces": [ 00:14:12.579 { 00:14:12.579 "nsid": 1, 00:14:12.579 "bdev_name": "Malloc1", 00:14:12.579 "name": "Malloc1", 00:14:12.579 "nguid": "5354282CAD28429B8A144A33A7027367", 00:14:12.579 "uuid": "5354282c-ad28-429b-8a14-4a33a7027367" 00:14:12.579 }, 00:14:12.579 { 00:14:12.579 "nsid": 2, 00:14:12.579 "bdev_name": "Malloc3", 00:14:12.579 "name": "Malloc3", 00:14:12.579 "nguid": "A97EC59B5D7140FF9C0D17A60075610B", 00:14:12.579 "uuid": "a97ec59b-5d71-40ff-9c0d-17a60075610b" 00:14:12.579 } 00:14:12.579 ] 00:14:12.579 }, 00:14:12.579 { 00:14:12.579 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:12.579 "subtype": "NVMe", 00:14:12.579 "listen_addresses": [ 00:14:12.579 { 00:14:12.579 "trtype": "VFIOUSER", 00:14:12.579 "adrfam": "IPv4", 00:14:12.579 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:12.579 "trsvcid": "0" 00:14:12.579 } 00:14:12.579 ], 00:14:12.579 "allow_any_host": true, 00:14:12.579 "hosts": [], 00:14:12.579 "serial_number": "SPDK2", 00:14:12.579 "model_number": "SPDK bdev Controller", 00:14:12.579 "max_namespaces": 32, 00:14:12.579 "min_cntlid": 1, 00:14:12.579 "max_cntlid": 65519, 00:14:12.579 "namespaces": [ 
00:14:12.579 { 00:14:12.579 "nsid": 1, 00:14:12.579 "bdev_name": "Malloc2", 00:14:12.579 "name": "Malloc2", 00:14:12.579 "nguid": "EFBB9595A82549CF82DC4D986E5956BD", 00:14:12.579 "uuid": "efbb9595-a825-49cf-82dc-4d986e5956bd" 00:14:12.579 } 00:14:12.579 ] 00:14:12.579 } 00:14:12.579 ] 00:14:12.579 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2620788 00:14:12.579 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.579 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:12.579 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:12.579 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:12.579 [2024-11-20 09:52:45.955857] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
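The second `nvmf_get_subsystems` dump above shows the effect of `nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2`: cnode1 now lists two namespaces, which is what triggered the Namespace Attribute Notice the AER test was waiting for. A small sketch parsing a trimmed copy of that JSON (timestamps stripped, only the fields relevant here retained) to confirm the new nsid:

```python
import json

# Trimmed, hand-copied subset of the nvmf_get_subsystems output above;
# only nqn/subtype/namespaces are kept for this check.
subsystems_json = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "namespaces": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc2"}]}
]
"""

subsystems = json.loads(subsystems_json)
cnode1 = next(s for s in subsystems if s["nqn"] == "nqn.2019-07.io.spdk:cnode1")
nsids = sorted(ns["nsid"] for ns in cnode1["namespaces"])
print(nsids)  # [1, 2] - the Malloc3 namespace added by the test is nsid 2
```

Comparing this against the first dump (where cnode1 had only nsid 1 / Malloc1) is effectively what the test asserts via the AER callback.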
00:14:12.579 [2024-11-20 09:52:45.955901] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620806 ] 00:14:12.579 [2024-11-20 09:52:46.002691] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:12.579 [2024-11-20 09:52:46.009418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:12.579 [2024-11-20 09:52:46.009441] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc703007000 00:14:12.580 [2024-11-20 09:52:46.010418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.011419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.012423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.013437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.014436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.015448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.016454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.580 
[2024-11-20 09:52:46.017460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.580 [2024-11-20 09:52:46.018469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:12.580 [2024-11-20 09:52:46.018479] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc702ffc000 00:14:12.580 [2024-11-20 09:52:46.019392] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:12.580 [2024-11-20 09:52:46.030752] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:12.580 [2024-11-20 09:52:46.030777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:12.580 [2024-11-20 09:52:46.035862] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:12.580 [2024-11-20 09:52:46.035900] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:12.580 [2024-11-20 09:52:46.035966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:12.580 [2024-11-20 09:52:46.035979] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:12.580 [2024-11-20 09:52:46.035984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:12.580 [2024-11-20 09:52:46.036863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:12.580 [2024-11-20 09:52:46.036873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:12.580 [2024-11-20 09:52:46.036879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:12.580 [2024-11-20 09:52:46.037870] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:12.580 [2024-11-20 09:52:46.037879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:12.580 [2024-11-20 09:52:46.037886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.038873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:12.580 [2024-11-20 09:52:46.038882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.039886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:12.580 [2024-11-20 09:52:46.039895] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:12.580 [2024-11-20 09:52:46.039899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.039905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.040013] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:12.580 [2024-11-20 09:52:46.040017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.040022] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:12.580 [2024-11-20 09:52:46.040895] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:12.580 [2024-11-20 09:52:46.041900] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:12.580 [2024-11-20 09:52:46.042910] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:12.580 [2024-11-20 09:52:46.043912] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:12.580 [2024-11-20 09:52:46.043949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:12.580 [2024-11-20 09:52:46.044927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:12.580 [2024-11-20 09:52:46.044935] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:12.580 [2024-11-20 09:52:46.044940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.044956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:12.580 [2024-11-20 09:52:46.044963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.044974] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.580 [2024-11-20 09:52:46.044978] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.580 [2024-11-20 09:52:46.044981] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.580 [2024-11-20 09:52:46.044992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.580 [2024-11-20 09:52:46.051210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:12.580 [2024-11-20 09:52:46.051224] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:12.580 [2024-11-20 09:52:46.051228] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:12.580 [2024-11-20 09:52:46.051233] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:12.580 [2024-11-20 09:52:46.051237] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:12.580 [2024-11-20 09:52:46.051244] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:12.580 [2024-11-20 09:52:46.051248] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:12.580 [2024-11-20 09:52:46.051252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.051260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.051270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:12.580 [2024-11-20 09:52:46.059206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:12.580 [2024-11-20 09:52:46.059217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.580 [2024-11-20 09:52:46.059225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.580 [2024-11-20 09:52:46.059233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.580 [2024-11-20 09:52:46.059242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.580 [2024-11-20 09:52:46.059247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.059253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.059261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:12.580 [2024-11-20 09:52:46.067206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:12.580 [2024-11-20 09:52:46.067216] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:12.580 [2024-11-20 09:52:46.067221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.067227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.067232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.067240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:12.580 [2024-11-20 09:52:46.075206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:12.580 [2024-11-20 09:52:46.075261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:12.580 [2024-11-20 09:52:46.075268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:12.580 
[2024-11-20 09:52:46.075275] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:12.580 [2024-11-20 09:52:46.075280] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:12.580 [2024-11-20 09:52:46.075283] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.580 [2024-11-20 09:52:46.075289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:12.580 [2024-11-20 09:52:46.083208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.083220] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:12.581 [2024-11-20 09:52:46.083231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.083238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.083244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.581 [2024-11-20 09:52:46.083248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.581 [2024-11-20 09:52:46.083251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.581 [2024-11-20 09:52:46.083257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.091221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.091229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.091235] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.581 [2024-11-20 09:52:46.091239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.581 [2024-11-20 09:52:46.091242] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.581 [2024-11-20 09:52:46.091248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.099208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.099219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099264] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:12.581 [2024-11-20 09:52:46.099269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:12.581 [2024-11-20 09:52:46.099273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:12.581 [2024-11-20 09:52:46.099288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.107208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.107222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.115207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.115220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.123208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 
09:52:46.123222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.131207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.131224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:12.581 [2024-11-20 09:52:46.131233] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:12.581 [2024-11-20 09:52:46.131238] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:12.581 [2024-11-20 09:52:46.131243] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:12.581 [2024-11-20 09:52:46.131247] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:12.581 [2024-11-20 09:52:46.131253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:12.581 [2024-11-20 09:52:46.131262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:12.581 [2024-11-20 09:52:46.131269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:12.581 [2024-11-20 09:52:46.131273] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.581 [2024-11-20 09:52:46.131280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.131289] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:12.581 [2024-11-20 09:52:46.131294] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.581 [2024-11-20 09:52:46.131298] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.581 [2024-11-20 09:52:46.131304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.131312] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:12.581 [2024-11-20 09:52:46.131318] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:12.581 [2024-11-20 09:52:46.131323] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.581 [2024-11-20 09:52:46.131330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:12.581 [2024-11-20 09:52:46.139208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.139224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.139233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:12.581 [2024-11-20 09:52:46.139239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:12.581 ===================================================== 00:14:12.581 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:12.581 ===================================================== 00:14:12.581 Controller Capabilities/Features 00:14:12.581 
================================ 00:14:12.581 Vendor ID: 4e58 00:14:12.581 Subsystem Vendor ID: 4e58 00:14:12.581 Serial Number: SPDK2 00:14:12.581 Model Number: SPDK bdev Controller 00:14:12.581 Firmware Version: 25.01 00:14:12.581 Recommended Arb Burst: 6 00:14:12.581 IEEE OUI Identifier: 8d 6b 50 00:14:12.581 Multi-path I/O 00:14:12.581 May have multiple subsystem ports: Yes 00:14:12.581 May have multiple controllers: Yes 00:14:12.581 Associated with SR-IOV VF: No 00:14:12.581 Max Data Transfer Size: 131072 00:14:12.581 Max Number of Namespaces: 32 00:14:12.581 Max Number of I/O Queues: 127 00:14:12.581 NVMe Specification Version (VS): 1.3 00:14:12.581 NVMe Specification Version (Identify): 1.3 00:14:12.581 Maximum Queue Entries: 256 00:14:12.581 Contiguous Queues Required: Yes 00:14:12.581 Arbitration Mechanisms Supported 00:14:12.581 Weighted Round Robin: Not Supported 00:14:12.581 Vendor Specific: Not Supported 00:14:12.581 Reset Timeout: 15000 ms 00:14:12.581 Doorbell Stride: 4 bytes 00:14:12.581 NVM Subsystem Reset: Not Supported 00:14:12.581 Command Sets Supported 00:14:12.581 NVM Command Set: Supported 00:14:12.581 Boot Partition: Not Supported 00:14:12.581 Memory Page Size Minimum: 4096 bytes 00:14:12.581 Memory Page Size Maximum: 4096 bytes 00:14:12.581 Persistent Memory Region: Not Supported 00:14:12.581 Optional Asynchronous Events Supported 00:14:12.581 Namespace Attribute Notices: Supported 00:14:12.581 Firmware Activation Notices: Not Supported 00:14:12.581 ANA Change Notices: Not Supported 00:14:12.581 PLE Aggregate Log Change Notices: Not Supported 00:14:12.581 LBA Status Info Alert Notices: Not Supported 00:14:12.581 EGE Aggregate Log Change Notices: Not Supported 00:14:12.581 Normal NVM Subsystem Shutdown event: Not Supported 00:14:12.581 Zone Descriptor Change Notices: Not Supported 00:14:12.581 Discovery Log Change Notices: Not Supported 00:14:12.581 Controller Attributes 00:14:12.581 128-bit Host Identifier: Supported 00:14:12.581 
Non-Operational Permissive Mode: Not Supported 00:14:12.581 NVM Sets: Not Supported 00:14:12.581 Read Recovery Levels: Not Supported 00:14:12.581 Endurance Groups: Not Supported 00:14:12.581 Predictable Latency Mode: Not Supported 00:14:12.581 Traffic Based Keep ALive: Not Supported 00:14:12.581 Namespace Granularity: Not Supported 00:14:12.581 SQ Associations: Not Supported 00:14:12.581 UUID List: Not Supported 00:14:12.581 Multi-Domain Subsystem: Not Supported 00:14:12.581 Fixed Capacity Management: Not Supported 00:14:12.582 Variable Capacity Management: Not Supported 00:14:12.582 Delete Endurance Group: Not Supported 00:14:12.582 Delete NVM Set: Not Supported 00:14:12.582 Extended LBA Formats Supported: Not Supported 00:14:12.582 Flexible Data Placement Supported: Not Supported 00:14:12.582 00:14:12.582 Controller Memory Buffer Support 00:14:12.582 ================================ 00:14:12.582 Supported: No 00:14:12.582 00:14:12.582 Persistent Memory Region Support 00:14:12.582 ================================ 00:14:12.582 Supported: No 00:14:12.582 00:14:12.582 Admin Command Set Attributes 00:14:12.582 ============================ 00:14:12.582 Security Send/Receive: Not Supported 00:14:12.582 Format NVM: Not Supported 00:14:12.582 Firmware Activate/Download: Not Supported 00:14:12.582 Namespace Management: Not Supported 00:14:12.582 Device Self-Test: Not Supported 00:14:12.582 Directives: Not Supported 00:14:12.582 NVMe-MI: Not Supported 00:14:12.582 Virtualization Management: Not Supported 00:14:12.582 Doorbell Buffer Config: Not Supported 00:14:12.582 Get LBA Status Capability: Not Supported 00:14:12.582 Command & Feature Lockdown Capability: Not Supported 00:14:12.582 Abort Command Limit: 4 00:14:12.582 Async Event Request Limit: 4 00:14:12.582 Number of Firmware Slots: N/A 00:14:12.582 Firmware Slot 1 Read-Only: N/A 00:14:12.582 Firmware Activation Without Reset: N/A 00:14:12.582 Multiple Update Detection Support: N/A 00:14:12.582 Firmware Update 
Granularity: No Information Provided 00:14:12.582 Per-Namespace SMART Log: No 00:14:12.582 Asymmetric Namespace Access Log Page: Not Supported 00:14:12.582 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:12.582 Command Effects Log Page: Supported 00:14:12.582 Get Log Page Extended Data: Supported 00:14:12.582 Telemetry Log Pages: Not Supported 00:14:12.582 Persistent Event Log Pages: Not Supported 00:14:12.582 Supported Log Pages Log Page: May Support 00:14:12.582 Commands Supported & Effects Log Page: Not Supported 00:14:12.582 Feature Identifiers & Effects Log Page:May Support 00:14:12.582 NVMe-MI Commands & Effects Log Page: May Support 00:14:12.582 Data Area 4 for Telemetry Log: Not Supported 00:14:12.582 Error Log Page Entries Supported: 128 00:14:12.582 Keep Alive: Supported 00:14:12.582 Keep Alive Granularity: 10000 ms 00:14:12.582 00:14:12.582 NVM Command Set Attributes 00:14:12.582 ========================== 00:14:12.582 Submission Queue Entry Size 00:14:12.582 Max: 64 00:14:12.582 Min: 64 00:14:12.582 Completion Queue Entry Size 00:14:12.582 Max: 16 00:14:12.582 Min: 16 00:14:12.582 Number of Namespaces: 32 00:14:12.582 Compare Command: Supported 00:14:12.582 Write Uncorrectable Command: Not Supported 00:14:12.582 Dataset Management Command: Supported 00:14:12.582 Write Zeroes Command: Supported 00:14:12.582 Set Features Save Field: Not Supported 00:14:12.582 Reservations: Not Supported 00:14:12.582 Timestamp: Not Supported 00:14:12.582 Copy: Supported 00:14:12.582 Volatile Write Cache: Present 00:14:12.582 Atomic Write Unit (Normal): 1 00:14:12.582 Atomic Write Unit (PFail): 1 00:14:12.582 Atomic Compare & Write Unit: 1 00:14:12.582 Fused Compare & Write: Supported 00:14:12.582 Scatter-Gather List 00:14:12.582 SGL Command Set: Supported (Dword aligned) 00:14:12.582 SGL Keyed: Not Supported 00:14:12.582 SGL Bit Bucket Descriptor: Not Supported 00:14:12.582 SGL Metadata Pointer: Not Supported 00:14:12.582 Oversized SGL: Not Supported 00:14:12.582 SGL 
Metadata Address: Not Supported 00:14:12.582 SGL Offset: Not Supported 00:14:12.582 Transport SGL Data Block: Not Supported 00:14:12.582 Replay Protected Memory Block: Not Supported 00:14:12.582 00:14:12.582 Firmware Slot Information 00:14:12.582 ========================= 00:14:12.582 Active slot: 1 00:14:12.582 Slot 1 Firmware Revision: 25.01 00:14:12.582 00:14:12.582 00:14:12.582 Commands Supported and Effects 00:14:12.582 ============================== 00:14:12.582 Admin Commands 00:14:12.582 -------------- 00:14:12.582 Get Log Page (02h): Supported 00:14:12.582 Identify (06h): Supported 00:14:12.582 Abort (08h): Supported 00:14:12.582 Set Features (09h): Supported 00:14:12.582 Get Features (0Ah): Supported 00:14:12.582 Asynchronous Event Request (0Ch): Supported 00:14:12.582 Keep Alive (18h): Supported 00:14:12.582 I/O Commands 00:14:12.582 ------------ 00:14:12.582 Flush (00h): Supported LBA-Change 00:14:12.582 Write (01h): Supported LBA-Change 00:14:12.582 Read (02h): Supported 00:14:12.582 Compare (05h): Supported 00:14:12.582 Write Zeroes (08h): Supported LBA-Change 00:14:12.582 Dataset Management (09h): Supported LBA-Change 00:14:12.582 Copy (19h): Supported LBA-Change 00:14:12.582 00:14:12.582 Error Log 00:14:12.582 ========= 00:14:12.582 00:14:12.582 Arbitration 00:14:12.582 =========== 00:14:12.582 Arbitration Burst: 1 00:14:12.582 00:14:12.582 Power Management 00:14:12.582 ================ 00:14:12.582 Number of Power States: 1 00:14:12.582 Current Power State: Power State #0 00:14:12.582 Power State #0: 00:14:12.582 Max Power: 0.00 W 00:14:12.582 Non-Operational State: Operational 00:14:12.582 Entry Latency: Not Reported 00:14:12.582 Exit Latency: Not Reported 00:14:12.582 Relative Read Throughput: 0 00:14:12.582 Relative Read Latency: 0 00:14:12.582 Relative Write Throughput: 0 00:14:12.582 Relative Write Latency: 0 00:14:12.582 Idle Power: Not Reported 00:14:12.582 Active Power: Not Reported 00:14:12.582 Non-Operational Permissive Mode: Not 
Supported 00:14:12.582 00:14:12.582 Health Information 00:14:12.582 ================== 00:14:12.582 Critical Warnings: 00:14:12.582 Available Spare Space: OK 00:14:12.582 Temperature: OK 00:14:12.582 Device Reliability: OK 00:14:12.582 Read Only: No 00:14:12.582 Volatile Memory Backup: OK 00:14:12.582 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:12.582 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:12.582 Available Spare: 0% 00:14:12.582 Available Sp[2024-11-20 09:52:46.139327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:12.582 [2024-11-20 09:52:46.147209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:12.582 [2024-11-20 09:52:46.147240] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:12.582 [2024-11-20 09:52:46.147248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.582 [2024-11-20 09:52:46.147254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.582 [2024-11-20 09:52:46.147259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.582 [2024-11-20 09:52:46.147264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.582 [2024-11-20 09:52:46.147313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:12.582 [2024-11-20 09:52:46.147327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:12.582 
[2024-11-20 09:52:46.148311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:12.582 [2024-11-20 09:52:46.148354] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:12.582 [2024-11-20 09:52:46.148360] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:12.582 [2024-11-20 09:52:46.149318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:12.582 [2024-11-20 09:52:46.149329] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:12.582 [2024-11-20 09:52:46.149374] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:12.582 [2024-11-20 09:52:46.150460] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:12.841 are Threshold: 0% 00:14:12.841 Life Percentage Used: 0% 00:14:12.841 Data Units Read: 0 00:14:12.841 Data Units Written: 0 00:14:12.841 Host Read Commands: 0 00:14:12.841 Host Write Commands: 0 00:14:12.841 Controller Busy Time: 0 minutes 00:14:12.841 Power Cycles: 0 00:14:12.841 Power On Hours: 0 hours 00:14:12.841 Unsafe Shutdowns: 0 00:14:12.841 Unrecoverable Media Errors: 0 00:14:12.841 Lifetime Error Log Entries: 0 00:14:12.841 Warning Temperature Time: 0 minutes 00:14:12.841 Critical Temperature Time: 0 minutes 00:14:12.841 00:14:12.841 Number of Queues 00:14:12.841 ================ 00:14:12.841 Number of I/O Submission Queues: 127 00:14:12.841 Number of I/O Completion Queues: 127 00:14:12.841 00:14:12.841 Active Namespaces 00:14:12.842 ================= 00:14:12.842 Namespace ID:1 00:14:12.842 Error Recovery Timeout: Unlimited 
00:14:12.842 Command Set Identifier: NVM (00h) 00:14:12.842 Deallocate: Supported 00:14:12.842 Deallocated/Unwritten Error: Not Supported 00:14:12.842 Deallocated Read Value: Unknown 00:14:12.842 Deallocate in Write Zeroes: Not Supported 00:14:12.842 Deallocated Guard Field: 0xFFFF 00:14:12.842 Flush: Supported 00:14:12.842 Reservation: Supported 00:14:12.842 Namespace Sharing Capabilities: Multiple Controllers 00:14:12.842 Size (in LBAs): 131072 (0GiB) 00:14:12.842 Capacity (in LBAs): 131072 (0GiB) 00:14:12.842 Utilization (in LBAs): 131072 (0GiB) 00:14:12.842 NGUID: EFBB9595A82549CF82DC4D986E5956BD 00:14:12.842 UUID: efbb9595-a825-49cf-82dc-4d986e5956bd 00:14:12.842 Thin Provisioning: Not Supported 00:14:12.842 Per-NS Atomic Units: Yes 00:14:12.842 Atomic Boundary Size (Normal): 0 00:14:12.842 Atomic Boundary Size (PFail): 0 00:14:12.842 Atomic Boundary Offset: 0 00:14:12.842 Maximum Single Source Range Length: 65535 00:14:12.842 Maximum Copy Length: 65535 00:14:12.842 Maximum Source Range Count: 1 00:14:12.842 NGUID/EUI64 Never Reused: No 00:14:12.842 Namespace Write Protected: No 00:14:12.842 Number of LBA Formats: 1 00:14:12.842 Current LBA Format: LBA Format #00 00:14:12.842 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:12.842 00:14:12.842 09:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:12.842 [2024-11-20 09:52:46.387427] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.115 Initializing NVMe Controllers 00:14:18.115 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.115 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:18.115 Initialization complete. Launching workers. 00:14:18.115 ======================================================== 00:14:18.115 Latency(us) 00:14:18.115 Device Information : IOPS MiB/s Average min max 00:14:18.115 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39984.40 156.19 3201.66 942.71 6670.40 00:14:18.115 ======================================================== 00:14:18.115 Total : 39984.40 156.19 3201.66 942.71 6670.40 00:14:18.115 00:14:18.115 [2024-11-20 09:52:51.489459] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.115 09:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:18.373 [2024-11-20 09:52:51.721186] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.646 Initializing NVMe Controllers 00:14:23.646 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:23.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:23.646 Initialization complete. Launching workers. 
00:14:23.646 ======================================================== 00:14:23.646 Latency(us) 00:14:23.646 Device Information : IOPS MiB/s Average min max 00:14:23.646 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.52 156.05 3204.01 929.95 7046.99 00:14:23.646 ======================================================== 00:14:23.646 Total : 39947.52 156.05 3204.01 929.95 7046.99 00:14:23.646 00:14:23.646 [2024-11-20 09:52:56.745204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:23.646 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:23.646 [2024-11-20 09:52:56.946456] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.918 [2024-11-20 09:53:02.107300] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.918 Initializing NVMe Controllers 00:14:28.918 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:28.918 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:28.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:28.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:28.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:28.918 Initialization complete. Launching workers. 
00:14:28.918 Starting thread on core 2 00:14:28.918 Starting thread on core 3 00:14:28.918 Starting thread on core 1 00:14:28.918 09:53:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:28.918 [2024-11-20 09:53:02.399099] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:32.210 [2024-11-20 09:53:05.460196] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:32.210 Initializing NVMe Controllers 00:14:32.210 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.210 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:32.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:32.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:32.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:32.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:32.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:32.210 Initialization complete. Launching workers. 
00:14:32.210 Starting thread on core 1 with urgent priority queue 00:14:32.210 Starting thread on core 2 with urgent priority queue 00:14:32.210 Starting thread on core 3 with urgent priority queue 00:14:32.210 Starting thread on core 0 with urgent priority queue 00:14:32.210 SPDK bdev Controller (SPDK2 ) core 0: 5282.00 IO/s 18.93 secs/100000 ios 00:14:32.210 SPDK bdev Controller (SPDK2 ) core 1: 5341.00 IO/s 18.72 secs/100000 ios 00:14:32.210 SPDK bdev Controller (SPDK2 ) core 2: 7545.33 IO/s 13.25 secs/100000 ios 00:14:32.210 SPDK bdev Controller (SPDK2 ) core 3: 6326.33 IO/s 15.81 secs/100000 ios 00:14:32.210 ======================================================== 00:14:32.210 00:14:32.210 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:32.210 [2024-11-20 09:53:05.746604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:32.210 Initializing NVMe Controllers 00:14:32.210 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.210 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:32.210 Namespace ID: 1 size: 0GB 00:14:32.210 Initialization complete. 00:14:32.210 INFO: using host memory buffer for IO 00:14:32.210 Hello world! 
00:14:32.210 [2024-11-20 09:53:05.758678] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:32.469 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:32.469 [2024-11-20 09:53:06.045620] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.849 Initializing NVMe Controllers 00:14:33.849 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.849 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:33.849 Initialization complete. Launching workers. 00:14:33.849 submit (in ns) avg, min, max = 6475.8, 3189.5, 3999787.6 00:14:33.849 complete (in ns) avg, min, max = 22697.3, 1768.6, 4054139.0 00:14:33.849 00:14:33.849 Submit histogram 00:14:33.849 ================ 00:14:33.849 Range in us Cumulative Count 00:14:33.849 3.185 - 3.200: 0.0832% ( 14) 00:14:33.849 3.200 - 3.215: 0.6957% ( 103) 00:14:33.849 3.215 - 3.230: 3.1335% ( 410) 00:14:33.849 3.230 - 3.246: 6.7606% ( 610) 00:14:33.849 3.246 - 3.261: 11.2796% ( 760) 00:14:33.849 3.261 - 3.276: 16.8748% ( 941) 00:14:33.849 3.276 - 3.291: 22.8981% ( 1013) 00:14:33.849 3.291 - 3.307: 28.7014% ( 976) 00:14:33.849 3.307 - 3.322: 34.9506% ( 1051) 00:14:33.849 3.322 - 3.337: 40.9561% ( 1010) 00:14:33.849 3.337 - 3.352: 46.4205% ( 919) 00:14:33.849 3.352 - 3.368: 51.7838% ( 902) 00:14:33.849 3.368 - 3.383: 59.2342% ( 1253) 00:14:33.849 3.383 - 3.398: 65.6380% ( 1077) 00:14:33.849 3.398 - 3.413: 70.4305% ( 806) 00:14:33.849 3.413 - 3.429: 75.2289% ( 807) 00:14:33.849 3.429 - 3.444: 79.2009% ( 668) 00:14:33.849 3.444 - 3.459: 82.5009% ( 555) 00:14:33.849 3.459 - 3.474: 84.7009% ( 370) 00:14:33.849 3.474 - 3.490: 86.0447% ( 226) 00:14:33.849 3.490 - 3.505: 87.0674% ( 
172) 00:14:33.849 3.505 - 3.520: 87.8404% ( 130) 00:14:33.849 3.520 - 3.535: 88.4707% ( 106) 00:14:33.849 3.535 - 3.550: 89.2437% ( 130) 00:14:33.849 3.550 - 3.566: 90.0166% ( 130) 00:14:33.849 3.566 - 3.581: 90.8550% ( 141) 00:14:33.849 3.581 - 3.596: 91.6637% ( 136) 00:14:33.849 3.596 - 3.611: 92.6032% ( 158) 00:14:33.849 3.611 - 3.627: 93.6437% ( 175) 00:14:33.849 3.627 - 3.642: 94.6605% ( 171) 00:14:33.849 3.642 - 3.657: 95.4454% ( 132) 00:14:33.849 3.657 - 3.672: 96.2600% ( 137) 00:14:33.849 3.672 - 3.688: 96.9497% ( 116) 00:14:33.849 3.688 - 3.703: 97.6038% ( 110) 00:14:33.849 3.703 - 3.718: 98.0557% ( 76) 00:14:33.849 3.718 - 3.733: 98.5908% ( 90) 00:14:33.849 3.733 - 3.749: 98.8346% ( 41) 00:14:33.849 3.749 - 3.764: 99.1022% ( 45) 00:14:33.849 3.764 - 3.779: 99.3281% ( 38) 00:14:33.849 3.779 - 3.794: 99.4232% ( 16) 00:14:33.849 3.794 - 3.810: 99.4827% ( 10) 00:14:33.849 3.810 - 3.825: 99.5184% ( 6) 00:14:33.849 3.825 - 3.840: 99.5362% ( 3) 00:14:33.849 3.840 - 3.855: 99.5957% ( 10) 00:14:33.849 3.855 - 3.870: 99.6016% ( 1) 00:14:33.849 3.870 - 3.886: 99.6135% ( 2) 00:14:33.849 3.901 - 3.931: 99.6254% ( 2) 00:14:33.849 4.145 - 4.175: 99.6313% ( 1) 00:14:33.849 4.846 - 4.876: 99.6373% ( 1) 00:14:33.849 4.876 - 4.907: 99.6492% ( 2) 00:14:33.849 4.968 - 4.998: 99.6551% ( 1) 00:14:33.849 4.998 - 5.029: 99.6611% ( 1) 00:14:33.849 5.029 - 5.059: 99.6670% ( 1) 00:14:33.849 5.120 - 5.150: 99.6730% ( 1) 00:14:33.849 5.211 - 5.242: 99.6789% ( 1) 00:14:33.849 5.272 - 5.303: 99.6849% ( 1) 00:14:33.849 5.303 - 5.333: 99.6908% ( 1) 00:14:33.849 5.333 - 5.364: 99.7086% ( 3) 00:14:33.849 5.394 - 5.425: 99.7146% ( 1) 00:14:33.849 5.425 - 5.455: 99.7205% ( 1) 00:14:33.849 5.455 - 5.486: 99.7265% ( 1) 00:14:33.849 5.486 - 5.516: 99.7384% ( 2) 00:14:33.849 5.547 - 5.577: 99.7443% ( 1) 00:14:33.849 5.730 - 5.760: 99.7562% ( 2) 00:14:33.849 5.790 - 5.821: 99.7622% ( 1) 00:14:33.849 5.882 - 5.912: 99.7681% ( 1) 00:14:33.849 5.912 - 5.943: 99.7741% ( 1) 00:14:33.849 5.973 - 6.004: 
99.7859% ( 2) 00:14:33.849 6.034 - 6.065: 99.7978% ( 2) 00:14:33.849 6.065 - 6.095: 99.8038% ( 1) 00:14:33.849 6.095 - 6.126: 99.8097% ( 1) 00:14:33.849 6.156 - 6.187: 99.8216% ( 2) 00:14:33.849 6.217 - 6.248: 99.8276% ( 1) 00:14:33.849 6.248 - 6.278: 99.8335% ( 1) 00:14:33.849 6.309 - 6.339: 99.8395% ( 1) 00:14:33.849 6.339 - 6.370: 99.8454% ( 1) 00:14:33.849 6.370 - 6.400: 99.8513% ( 1) 00:14:33.849 6.430 - 6.461: 99.8573% ( 1) 00:14:33.849 6.461 - 6.491: 99.8632% ( 1) 00:14:33.849 [2024-11-20 09:53:07.137188] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.849 6.491 - 6.522: 99.8692% ( 1) 00:14:33.849 6.552 - 6.583: 99.8751% ( 1) 00:14:33.849 6.583 - 6.613: 99.8811% ( 1) 00:14:33.849 6.644 - 6.674: 99.8870% ( 1) 00:14:33.849 6.857 - 6.888: 99.8930% ( 1) 00:14:33.849 7.375 - 7.406: 99.8989% ( 1) 00:14:33.849 7.558 - 7.589: 99.9049% ( 1) 00:14:33.849 8.107 - 8.168: 99.9108% ( 1) 00:14:33.849 8.168 - 8.229: 99.9168% ( 1) 00:14:33.849 11.032 - 11.093: 99.9227% ( 1) 00:14:33.849 3994.575 - 4025.783: 100.0000% ( 13) 00:14:33.849 00:14:33.849 Complete histogram 00:14:33.849 ================== 00:14:33.849 Range in us Cumulative Count 00:14:33.849 1.768 - 1.775: 0.0178% ( 3) 00:14:33.849 1.775 - 1.783: 0.1724% ( 26) 00:14:33.849 1.783 - 1.790: 0.7492% ( 97) 00:14:33.849 1.790 - 1.798: 1.5103% ( 128) 00:14:33.849 1.798 - 1.806: 2.1643% ( 110) 00:14:33.849 1.806 - 1.813: 2.8125% ( 109) 00:14:33.849 1.813 - 1.821: 3.3833% ( 96) 00:14:33.849 1.821 - 1.829: 7.0044% ( 609) 00:14:33.849 1.829 - 1.836: 27.3516% ( 3422) 00:14:33.849 1.836 - 1.844: 60.1439% ( 5515) 00:14:33.849 1.844 - 1.851: 81.4068% ( 3576) 00:14:33.849 1.851 - 1.859: 89.3329% ( 1333) 00:14:33.849 1.859 - 1.867: 92.8767% ( 596) 00:14:33.849 1.867 - 1.874: 95.0826% ( 371) 00:14:33.849 1.874 - 1.882: 96.1113% ( 173) 00:14:33.849 1.882 - 1.890: 96.4919% ( 64) 00:14:33.849 1.890 - 1.897: 96.8605% ( 62) 00:14:33.849 1.897 - 1.905: 97.3481% ( 82) 
00:14:33.849 1.905 - 1.912: 97.9070% ( 94) 00:14:33.849 1.912 - 1.920: 98.3530% ( 75) 00:14:33.849 1.920 - 1.928: 98.7157% ( 61) 00:14:33.849 1.928 - 1.935: 98.9594% ( 41) 00:14:33.849 1.935 - 1.943: 99.0903% ( 22) 00:14:33.849 1.943 - 1.950: 99.1616% ( 12) 00:14:33.849 1.950 - 1.966: 99.2508% ( 15) 00:14:33.849 1.966 - 1.981: 99.2567% ( 1) 00:14:33.849 1.981 - 1.996: 99.2686% ( 2) 00:14:33.849 1.996 - 2.011: 99.2865% ( 3) 00:14:33.849 2.011 - 2.027: 99.2924% ( 1) 00:14:33.849 2.027 - 2.042: 99.2984% ( 1) 00:14:33.849 2.057 - 2.072: 99.3103% ( 2) 00:14:33.849 2.301 - 2.316: 99.3162% ( 1) 00:14:33.849 2.392 - 2.408: 99.3222% ( 1) 00:14:33.849 3.368 - 3.383: 99.3281% ( 1) 00:14:33.849 3.429 - 3.444: 99.3340% ( 1) 00:14:33.849 3.627 - 3.642: 99.3400% ( 1) 00:14:33.849 3.718 - 3.733: 99.3459% ( 1) 00:14:33.849 3.840 - 3.855: 99.3519% ( 1) 00:14:33.849 3.901 - 3.931: 99.3578% ( 1) 00:14:33.849 4.114 - 4.145: 99.3638% ( 1) 00:14:33.849 4.145 - 4.175: 99.3697% ( 1) 00:14:33.849 4.236 - 4.267: 99.3816% ( 2) 00:14:33.849 4.297 - 4.328: 99.3876% ( 1) 00:14:33.849 4.328 - 4.358: 99.3935% ( 1) 00:14:33.849 4.510 - 4.541: 99.3995% ( 1) 00:14:33.849 4.541 - 4.571: 99.4113% ( 2) 00:14:33.849 4.632 - 4.663: 99.4173% ( 1) 00:14:33.849 4.693 - 4.724: 99.4232% ( 1) 00:14:33.849 4.937 - 4.968: 99.4292% ( 1) 00:14:33.849 5.425 - 5.455: 99.4351% ( 1) 00:14:33.849 5.455 - 5.486: 99.4411% ( 1) 00:14:33.849 6.491 - 6.522: 99.4470% ( 1) 00:14:33.849 7.040 - 7.070: 99.4530% ( 1) 00:14:33.849 7.314 - 7.345: 99.4589% ( 1) 00:14:33.849 9.509 - 9.570: 99.4649% ( 1) 00:14:33.849 10.484 - 10.545: 99.4708% ( 1) 00:14:33.850 39.010 - 39.253: 99.4768% ( 1) 00:14:33.850 3073.950 - 3089.554: 99.4827% ( 1) 00:14:33.850 3978.971 - 3994.575: 99.4886% ( 1) 00:14:33.850 3994.575 - 4025.783: 99.9941% ( 85) 00:14:33.850 4025.783 - 4056.990: 100.0000% ( 1) 00:14:33.850 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:33.850 [ 00:14:33.850 { 00:14:33.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.850 "subtype": "Discovery", 00:14:33.850 "listen_addresses": [], 00:14:33.850 "allow_any_host": true, 00:14:33.850 "hosts": [] 00:14:33.850 }, 00:14:33.850 { 00:14:33.850 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:33.850 "subtype": "NVMe", 00:14:33.850 "listen_addresses": [ 00:14:33.850 { 00:14:33.850 "trtype": "VFIOUSER", 00:14:33.850 "adrfam": "IPv4", 00:14:33.850 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:33.850 "trsvcid": "0" 00:14:33.850 } 00:14:33.850 ], 00:14:33.850 "allow_any_host": true, 00:14:33.850 "hosts": [], 00:14:33.850 "serial_number": "SPDK1", 00:14:33.850 "model_number": "SPDK bdev Controller", 00:14:33.850 "max_namespaces": 32, 00:14:33.850 "min_cntlid": 1, 00:14:33.850 "max_cntlid": 65519, 00:14:33.850 "namespaces": [ 00:14:33.850 { 00:14:33.850 "nsid": 1, 00:14:33.850 "bdev_name": "Malloc1", 00:14:33.850 "name": "Malloc1", 00:14:33.850 "nguid": "5354282CAD28429B8A144A33A7027367", 00:14:33.850 "uuid": "5354282c-ad28-429b-8a14-4a33a7027367" 00:14:33.850 }, 00:14:33.850 { 00:14:33.850 "nsid": 2, 00:14:33.850 "bdev_name": "Malloc3", 00:14:33.850 "name": "Malloc3", 00:14:33.850 "nguid": "A97EC59B5D7140FF9C0D17A60075610B", 00:14:33.850 "uuid": "a97ec59b-5d71-40ff-9c0d-17a60075610b" 
00:14:33.850 } 00:14:33.850 ] 00:14:33.850 }, 00:14:33.850 { 00:14:33.850 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:33.850 "subtype": "NVMe", 00:14:33.850 "listen_addresses": [ 00:14:33.850 { 00:14:33.850 "trtype": "VFIOUSER", 00:14:33.850 "adrfam": "IPv4", 00:14:33.850 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:33.850 "trsvcid": "0" 00:14:33.850 } 00:14:33.850 ], 00:14:33.850 "allow_any_host": true, 00:14:33.850 "hosts": [], 00:14:33.850 "serial_number": "SPDK2", 00:14:33.850 "model_number": "SPDK bdev Controller", 00:14:33.850 "max_namespaces": 32, 00:14:33.850 "min_cntlid": 1, 00:14:33.850 "max_cntlid": 65519, 00:14:33.850 "namespaces": [ 00:14:33.850 { 00:14:33.850 "nsid": 1, 00:14:33.850 "bdev_name": "Malloc2", 00:14:33.850 "name": "Malloc2", 00:14:33.850 "nguid": "EFBB9595A82549CF82DC4D986E5956BD", 00:14:33.850 "uuid": "efbb9595-a825-49cf-82dc-4d986e5956bd" 00:14:33.850 } 00:14:33.850 ] 00:14:33.850 } 00:14:33.850 ] 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2624418 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:33.850 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:34.109 [2024-11-20 09:53:07.555610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.109 Malloc4 00:14:34.109 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:34.369 [2024-11-20 09:53:07.790283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.369 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:34.369 Asynchronous Event Request test 00:14:34.369 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:34.369 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:34.369 Registering asynchronous event callbacks... 00:14:34.369 Starting namespace attribute notice tests for all controllers... 00:14:34.369 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:34.369 aer_cb - Changed Namespace 00:14:34.369 Cleaning up... 
00:14:34.628 [ 00:14:34.628 { 00:14:34.628 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:34.628 "subtype": "Discovery", 00:14:34.628 "listen_addresses": [], 00:14:34.628 "allow_any_host": true, 00:14:34.628 "hosts": [] 00:14:34.628 }, 00:14:34.628 { 00:14:34.628 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:34.628 "subtype": "NVMe", 00:14:34.628 "listen_addresses": [ 00:14:34.628 { 00:14:34.628 "trtype": "VFIOUSER", 00:14:34.628 "adrfam": "IPv4", 00:14:34.628 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:34.628 "trsvcid": "0" 00:14:34.628 } 00:14:34.628 ], 00:14:34.628 "allow_any_host": true, 00:14:34.628 "hosts": [], 00:14:34.628 "serial_number": "SPDK1", 00:14:34.628 "model_number": "SPDK bdev Controller", 00:14:34.628 "max_namespaces": 32, 00:14:34.628 "min_cntlid": 1, 00:14:34.628 "max_cntlid": 65519, 00:14:34.628 "namespaces": [ 00:14:34.628 { 00:14:34.628 "nsid": 1, 00:14:34.628 "bdev_name": "Malloc1", 00:14:34.628 "name": "Malloc1", 00:14:34.628 "nguid": "5354282CAD28429B8A144A33A7027367", 00:14:34.628 "uuid": "5354282c-ad28-429b-8a14-4a33a7027367" 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "nsid": 2, 00:14:34.629 "bdev_name": "Malloc3", 00:14:34.629 "name": "Malloc3", 00:14:34.629 "nguid": "A97EC59B5D7140FF9C0D17A60075610B", 00:14:34.629 "uuid": "a97ec59b-5d71-40ff-9c0d-17a60075610b" 00:14:34.629 } 00:14:34.629 ] 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:34.629 "subtype": "NVMe", 00:14:34.629 "listen_addresses": [ 00:14:34.629 { 00:14:34.629 "trtype": "VFIOUSER", 00:14:34.629 "adrfam": "IPv4", 00:14:34.629 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:34.629 "trsvcid": "0" 00:14:34.629 } 00:14:34.629 ], 00:14:34.629 "allow_any_host": true, 00:14:34.629 "hosts": [], 00:14:34.629 "serial_number": "SPDK2", 00:14:34.629 "model_number": "SPDK bdev Controller", 00:14:34.629 "max_namespaces": 32, 00:14:34.629 "min_cntlid": 1, 00:14:34.629 "max_cntlid": 65519, 00:14:34.629 "namespaces": [ 
00:14:34.629 { 00:14:34.629 "nsid": 1, 00:14:34.629 "bdev_name": "Malloc2", 00:14:34.629 "name": "Malloc2", 00:14:34.629 "nguid": "EFBB9595A82549CF82DC4D986E5956BD", 00:14:34.629 "uuid": "efbb9595-a825-49cf-82dc-4d986e5956bd" 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "nsid": 2, 00:14:34.629 "bdev_name": "Malloc4", 00:14:34.629 "name": "Malloc4", 00:14:34.629 "nguid": "2145F4CA19994E8AB922FE5114ECF550", 00:14:34.629 "uuid": "2145f4ca-1999-4e8a-b922-fe5114ecf550" 00:14:34.629 } 00:14:34.629 ] 00:14:34.629 } 00:14:34.629 ] 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2624418 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2616698 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2616698 ']' 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2616698 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616698 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616698' 00:14:34.629 killing process with pid 2616698 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2616698 00:14:34.629 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2616698 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2624496 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2624496' 00:14:34.888 Process pid: 2624496 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2624496 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2624496 ']' 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.888 
09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.888 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:34.888 [2024-11-20 09:53:08.345572] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:34.888 [2024-11-20 09:53:08.346437] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:14:34.888 [2024-11-20 09:53:08.346478] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.888 [2024-11-20 09:53:08.420092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.889 [2024-11-20 09:53:08.456650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.889 [2024-11-20 09:53:08.456690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.889 [2024-11-20 09:53:08.456697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.889 [2024-11-20 09:53:08.456702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.889 [2024-11-20 09:53:08.456707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:34.889 [2024-11-20 09:53:08.458236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.889 [2024-11-20 09:53:08.458291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.889 [2024-11-20 09:53:08.458399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.889 [2024-11-20 09:53:08.458400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.148 [2024-11-20 09:53:08.525529] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:35.148 [2024-11-20 09:53:08.526664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:35.148 [2024-11-20 09:53:08.526740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:35.148 [2024-11-20 09:53:08.526975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:35.148 [2024-11-20 09:53:08.527033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:35.148 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.148 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:35.148 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:36.085 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:36.345 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:36.345 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:36.345 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.345 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:36.345 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.604 Malloc1 00:14:36.604 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:36.863 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:36.863 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:37.122 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.122 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:37.122 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:37.381 Malloc2 00:14:37.381 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:37.640 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:37.640 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2624496 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2624496 ']' 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2624496 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.900 09:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2624496 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2624496' 00:14:37.900 killing process with pid 2624496 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2624496 00:14:37.900 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2624496 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:38.159 00:14:38.159 real 0m50.835s 00:14:38.159 user 3m16.694s 00:14:38.159 sys 0m3.118s 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:38.159 ************************************ 00:14:38.159 END TEST nvmf_vfio_user 00:14:38.159 ************************************ 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.159 09:53:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.419 ************************************ 00:14:38.419 START TEST nvmf_vfio_user_nvme_compliance 00:14:38.419 ************************************ 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:38.419 * Looking for test storage... 00:14:38.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.419 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.419 09:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.420 09:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:38.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.420 --rc genhtml_branch_coverage=1 00:14:38.420 --rc genhtml_function_coverage=1 00:14:38.420 --rc genhtml_legend=1 00:14:38.420 --rc geninfo_all_blocks=1 00:14:38.420 --rc geninfo_unexecuted_blocks=1 00:14:38.420 00:14:38.420 ' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:38.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.420 --rc genhtml_branch_coverage=1 00:14:38.420 --rc genhtml_function_coverage=1 00:14:38.420 --rc genhtml_legend=1 00:14:38.420 --rc geninfo_all_blocks=1 00:14:38.420 --rc geninfo_unexecuted_blocks=1 00:14:38.420 00:14:38.420 ' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:38.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.420 --rc genhtml_branch_coverage=1 00:14:38.420 --rc genhtml_function_coverage=1 00:14:38.420 --rc 
genhtml_legend=1 00:14:38.420 --rc geninfo_all_blocks=1 00:14:38.420 --rc geninfo_unexecuted_blocks=1 00:14:38.420 00:14:38.420 ' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:38.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.420 --rc genhtml_branch_coverage=1 00:14:38.420 --rc genhtml_function_coverage=1 00:14:38.420 --rc genhtml_legend=1 00:14:38.420 --rc geninfo_all_blocks=1 00:14:38.420 --rc geninfo_unexecuted_blocks=1 00:14:38.420 00:14:38.420 ' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.420 09:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.420 09:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:38.420 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2625251 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2625251' 00:14:38.421 Process pid: 2625251 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2625251 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2625251 ']' 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.421 09:53:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:38.421 [2024-11-20 09:53:11.988980] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:14:38.421 [2024-11-20 09:53:11.989028] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.680 [2024-11-20 09:53:12.062199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.680 [2024-11-20 09:53:12.101006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.680 [2024-11-20 09:53:12.101045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.680 [2024-11-20 09:53:12.101052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.680 [2024-11-20 09:53:12.101057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.680 [2024-11-20 09:53:12.101062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
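The xtrace records earlier in this log (scripts/common.sh@364-368: `(( ver1[v] > ver2[v] ))`, `(( ver1[v] < ver2[v] ))`, `return 0`) trace an element-wise version comparison that SPDK's common.sh uses to decide checks like `lt 1.15 2` before enabling the lcov `--rc` options. A minimal, self-contained bash sketch of that comparison follows; the `ver_lt` function name is illustrative and not part of scripts/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the element-wise version comparison traced in the log above.
# Returns 0 (true) when $1 is a strictly older version than $2.
ver_lt() {
  # Split both version strings on '.', '-' or ':' into arrays,
  # mirroring the ver1/ver2 arrays seen in the xtrace output.
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # Missing components compare as 0, so "1.15" vs "2" still works.
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # smaller: less-than
  done
  return 1  # all components equal: not less-than
}

ver_lt 1.15 2 && echo "1.15 is older than 2"
ver_lt 2.39.2 2.39 || echo "2.39.2 is not older than 2.39"
```

The comparison stops at the first differing component, which is why the trace shows a `return 0` immediately after the first `(( ver1[v] < ver2[v] ))` test succeeds.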
00:14:38.680 [2024-11-20 09:53:12.102386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.680 [2024-11-20 09:53:12.102491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.680 [2024-11-20 09:53:12.102492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.680 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.680 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:38.680 09:53:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.057 09:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.057 malloc0 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:40.057 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:40.058 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:40.058 00:14:40.058 00:14:40.058 CUnit - A unit testing framework for C - Version 2.1-3 00:14:40.058 http://cunit.sourceforge.net/ 00:14:40.058 00:14:40.058 00:14:40.058 Suite: nvme_compliance 00:14:40.058 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:53:13.449662] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.058 [2024-11-20 09:53:13.450995] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:40.058 [2024-11-20 09:53:13.451010] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:40.058 [2024-11-20 09:53:13.451017] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:40.058 [2024-11-20 09:53:13.452685] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.058 passed 00:14:40.058 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:53:13.529200] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.058 [2024-11-20 09:53:13.532221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.058 passed 00:14:40.058 Test: admin_identify_ns ...[2024-11-20 09:53:13.610425] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.348 [2024-11-20 09:53:13.673216] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:40.348 [2024-11-20 09:53:13.681225] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:40.348 [2024-11-20 09:53:13.702300] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:40.348 passed 00:14:40.348 Test: admin_get_features_mandatory_features ...[2024-11-20 09:53:13.774874] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.348 [2024-11-20 09:53:13.777896] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.348 passed 00:14:40.348 Test: admin_get_features_optional_features ...[2024-11-20 09:53:13.855376] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.348 [2024-11-20 09:53:13.858399] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.348 passed 00:14:40.640 Test: admin_set_features_number_of_queues ...[2024-11-20 09:53:13.934477] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.641 [2024-11-20 09:53:14.039306] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.641 passed 00:14:40.641 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:53:14.115630] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.641 [2024-11-20 09:53:14.118651] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.641 passed 00:14:40.641 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:53:14.196360] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.939 [2024-11-20 09:53:14.262212] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:40.939 [2024-11-20 09:53:14.275279] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.939 passed 00:14:40.939 Test: fabric_property_get ...[2024-11-20 09:53:14.350676] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.939 [2024-11-20 09:53:14.351911] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:40.939 [2024-11-20 09:53:14.353695] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.939 passed 00:14:40.939 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:53:14.433192] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:40.939 [2024-11-20 09:53:14.434437] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:40.939 [2024-11-20 09:53:14.436221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:40.939 passed 00:14:40.939 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:53:14.511559] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.197 [2024-11-20 09:53:14.596210] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.197 [2024-11-20 09:53:14.612209] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.197 [2024-11-20 09:53:14.617288] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.197 passed 00:14:41.197 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:53:14.690837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.197 [2024-11-20 09:53:14.692064] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:41.197 [2024-11-20 09:53:14.693861] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.197 passed 00:14:41.197 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:53:14.771420] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.456 [2024-11-20 09:53:14.847211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:41.456 [2024-11-20 
09:53:14.871220] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:41.456 [2024-11-20 09:53:14.876280] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.456 passed 00:14:41.456 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:53:14.950596] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.456 [2024-11-20 09:53:14.952835] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:41.456 [2024-11-20 09:53:14.952858] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:41.456 [2024-11-20 09:53:14.954631] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.456 passed 00:14:41.456 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:53:15.029367] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.715 [2024-11-20 09:53:15.125208] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:41.715 [2024-11-20 09:53:15.133210] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:41.715 [2024-11-20 09:53:15.141208] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:41.715 [2024-11-20 09:53:15.149210] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:41.715 [2024-11-20 09:53:15.178283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.715 passed 00:14:41.715 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:53:15.250868] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:41.715 [2024-11-20 09:53:15.269219] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:41.715 [2024-11-20 09:53:15.287054] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:41.974 passed 00:14:41.974 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:53:15.358551] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:42.912 [2024-11-20 09:53:16.472211] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:43.480 [2024-11-20 09:53:16.838497] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.480 passed 00:14:43.480 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:53:16.914356] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:43.480 [2024-11-20 09:53:17.046207] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:43.739 [2024-11-20 09:53:17.083258] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:43.739 passed 00:14:43.739 00:14:43.739 Run Summary: Type Total Ran Passed Failed Inactive 00:14:43.739 suites 1 1 n/a 0 0 00:14:43.739 tests 18 18 18 0 0 00:14:43.739 asserts 360 360 360 0 n/a 00:14:43.739 00:14:43.739 Elapsed time = 1.495 seconds 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2625251 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2625251 ']' 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2625251 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625251 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625251' 00:14:43.739 killing process with pid 2625251 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2625251 00:14:43.739 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2625251 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:43.999 00:14:43.999 real 0m5.619s 00:14:43.999 user 0m15.698s 00:14:43.999 sys 0m0.526s 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:43.999 ************************************ 00:14:43.999 END TEST nvmf_vfio_user_nvme_compliance 00:14:43.999 ************************************ 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.999 ************************************ 00:14:43.999 START TEST nvmf_vfio_user_fuzz 00:14:43.999 ************************************ 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:43.999 * Looking for test storage... 00:14:43.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:43.999 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.259 09:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.259 --rc genhtml_branch_coverage=1 00:14:44.259 --rc genhtml_function_coverage=1 00:14:44.259 --rc genhtml_legend=1 00:14:44.259 --rc geninfo_all_blocks=1 00:14:44.259 --rc geninfo_unexecuted_blocks=1 00:14:44.259 00:14:44.259 ' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.259 --rc genhtml_branch_coverage=1 00:14:44.259 --rc genhtml_function_coverage=1 00:14:44.259 --rc genhtml_legend=1 00:14:44.259 --rc geninfo_all_blocks=1 00:14:44.259 --rc geninfo_unexecuted_blocks=1 00:14:44.259 00:14:44.259 ' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.259 --rc genhtml_branch_coverage=1 00:14:44.259 --rc genhtml_function_coverage=1 00:14:44.259 --rc genhtml_legend=1 00:14:44.259 --rc geninfo_all_blocks=1 00:14:44.259 --rc geninfo_unexecuted_blocks=1 00:14:44.259 00:14:44.259 ' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.259 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:44.259 --rc genhtml_branch_coverage=1 00:14:44.259 --rc genhtml_function_coverage=1 00:14:44.259 --rc genhtml_legend=1 00:14:44.259 --rc geninfo_all_blocks=1 00:14:44.259 --rc geninfo_unexecuted_blocks=1 00:14:44.259 00:14:44.259 ' 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.259 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.260 09:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2626246 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2626246' 00:14:44.260 Process pid: 2626246 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2626246 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2626246 ']' 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.260 09:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.260 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:44.519 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.519 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:44.519 09:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.457 malloc0 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.457 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:45.458 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:17.537 Fuzzing completed. Shutting down the fuzz application 00:15:17.537 00:15:17.537 Dumping successful admin opcodes: 00:15:17.537 8, 9, 10, 24, 00:15:17.537 Dumping successful io opcodes: 00:15:17.537 0, 00:15:17.537 NS: 0x20000081ef00 I/O qp, Total commands completed: 1009423, total successful commands: 3958, random_seed: 196001792 00:15:17.537 NS: 0x20000081ef00 admin qp, Total commands completed: 242847, total successful commands: 1953, random_seed: 1383361984 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2626246 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2626246 ']' 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2626246 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626246 00:15:17.537 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626246' 00:15:17.537 killing process with pid 2626246 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2626246 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2626246 00:15:17.537 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:17.538 00:15:17.538 real 0m32.205s 00:15:17.538 user 0m29.275s 00:15:17.538 sys 0m31.522s 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.538 ************************************ 00:15:17.538 END TEST nvmf_vfio_user_fuzz 00:15:17.538 ************************************ 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.538 ************************************ 00:15:17.538 START TEST nvmf_auth_target 00:15:17.538 ************************************ 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:17.538 * Looking for test storage... 00:15:17.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.538 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.538 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:17.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.538 --rc genhtml_branch_coverage=1 00:15:17.538 --rc genhtml_function_coverage=1 00:15:17.538 --rc genhtml_legend=1 00:15:17.538 --rc geninfo_all_blocks=1 00:15:17.538 --rc geninfo_unexecuted_blocks=1 00:15:17.538 00:15:17.538 ' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:17.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.538 --rc genhtml_branch_coverage=1 00:15:17.538 --rc genhtml_function_coverage=1 00:15:17.538 --rc genhtml_legend=1 00:15:17.538 --rc geninfo_all_blocks=1 00:15:17.538 --rc geninfo_unexecuted_blocks=1 00:15:17.538 00:15:17.538 ' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:17.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.538 --rc genhtml_branch_coverage=1 00:15:17.538 --rc genhtml_function_coverage=1 00:15:17.538 --rc genhtml_legend=1 00:15:17.538 --rc geninfo_all_blocks=1 00:15:17.538 --rc geninfo_unexecuted_blocks=1 00:15:17.538 00:15:17.538 ' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:17.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.538 --rc genhtml_branch_coverage=1 00:15:17.538 --rc genhtml_function_coverage=1 00:15:17.538 --rc genhtml_legend=1 00:15:17.538 
--rc geninfo_all_blocks=1 00:15:17.538 --rc geninfo_unexecuted_blocks=1 00:15:17.538 00:15:17.538 ' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.538 
09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.538 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:17.539 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:17.539 09:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:17.539 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.814 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.815 09:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:22.815 09:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:22.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:22.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.815 
09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:22.815 Found net devices under 0000:86:00.0: cvl_0_0 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:22.815 
09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:22.815 Found net devices under 0000:86:00.1: cvl_0_1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:22.815 09:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:22.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:15:22.815 00:15:22.815 --- 10.0.0.2 ping statistics --- 00:15:22.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.815 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:15:22.815 00:15:22.815 --- 10.0.0.1 ping statistics --- 00:15:22.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.815 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:22.815 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2634567 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2634567 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2634567 ']' 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.816 09:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2634625 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c40a65923c57c307ae75cd25188a0c5383667d174d20989d 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6Qz 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c40a65923c57c307ae75cd25188a0c5383667d174d20989d 0 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c40a65923c57c307ae75cd25188a0c5383667d174d20989d 0 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c40a65923c57c307ae75cd25188a0c5383667d174d20989d 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6Qz 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6Qz 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.6Qz 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b3a72f766d7400009348bdf201ab063057f81ff1c22608a9a0885046995821f2 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vRr 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b3a72f766d7400009348bdf201ab063057f81ff1c22608a9a0885046995821f2 3 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b3a72f766d7400009348bdf201ab063057f81ff1c22608a9a0885046995821f2 3 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b3a72f766d7400009348bdf201ab063057f81ff1c22608a9a0885046995821f2 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vRr 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vRr 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vRr 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f16ed3a2f94ca50a26854592c5a2d1e 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kYv 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6f16ed3a2f94ca50a26854592c5a2d1e 1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
6f16ed3a2f94ca50a26854592c5a2d1e 1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f16ed3a2f94ca50a26854592c5a2d1e 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:22.816 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kYv 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kYv 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kYv 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b16dc50a927b2627a64b9c752715b7216a5b94d83ca1fc02 00:15:23.076 09:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vgI 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b16dc50a927b2627a64b9c752715b7216a5b94d83ca1fc02 2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b16dc50a927b2627a64b9c752715b7216a5b94d83ca1fc02 2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b16dc50a927b2627a64b9c752715b7216a5b94d83ca1fc02 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vgI 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vgI 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.vgI 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d696ca7cb2eae83434f797f34ef539882570c064da88285d 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.d9Z 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d696ca7cb2eae83434f797f34ef539882570c064da88285d 2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d696ca7cb2eae83434f797f34ef539882570c064da88285d 2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d696ca7cb2eae83434f797f34ef539882570c064da88285d 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.d9Z 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.d9Z 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.d9Z 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4d925223dd6f6427353081859489bbf 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.N1u 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4d925223dd6f6427353081859489bbf 1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4d925223dd6f6427353081859489bbf 1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4d925223dd6f6427353081859489bbf 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.N1u 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.N1u 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.N1u 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fbb2062facd93cac4645b2579b6afe9865d371fae1191616433ec91dfcdf84c2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xhE 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fbb2062facd93cac4645b2579b6afe9865d371fae1191616433ec91dfcdf84c2 3 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 fbb2062facd93cac4645b2579b6afe9865d371fae1191616433ec91dfcdf84c2 3 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fbb2062facd93cac4645b2579b6afe9865d371fae1191616433ec91dfcdf84c2 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xhE 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xhE 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xhE 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2634567 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2634567 ']' 00:15:23.076 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.077 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.077 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:23.077 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.077 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2634625 /var/tmp/host.sock 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2634625 ']' 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:23.335 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.336 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:23.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:23.336 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.336 09:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6Qz 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.594 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.595 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.595 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6Qz 00:15:23.595 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6Qz 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.vRr ]] 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vRr 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vRr 00:15:23.853 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vRr 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kYv 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kYv 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kYv 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.vgI ]] 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vgI 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vgI 00:15:24.112 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vgI 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.d9Z 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.d9Z 00:15:24.371 09:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.d9Z 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.N1u ]] 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.N1u 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.N1u 00:15:24.630 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.N1u 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xhE 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xhE 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xhE 00:15:24.889 09:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.889 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.149 09:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.149 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.409 00:15:25.409 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.409 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.409 09:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.667 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.668 { 00:15:25.668 "cntlid": 1, 00:15:25.668 "qid": 0, 00:15:25.668 "state": "enabled", 00:15:25.668 "thread": "nvmf_tgt_poll_group_000", 00:15:25.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:25.668 "listen_address": { 00:15:25.668 "trtype": "TCP", 00:15:25.668 "adrfam": "IPv4", 00:15:25.668 "traddr": "10.0.0.2", 00:15:25.668 "trsvcid": "4420" 00:15:25.668 }, 00:15:25.668 "peer_address": { 00:15:25.668 "trtype": "TCP", 00:15:25.668 "adrfam": "IPv4", 00:15:25.668 "traddr": "10.0.0.1", 00:15:25.668 "trsvcid": "38736" 00:15:25.668 }, 00:15:25.668 "auth": { 00:15:25.668 "state": "completed", 00:15:25.668 "digest": "sha256", 00:15:25.668 "dhgroup": "null" 00:15:25.668 } 00:15:25.668 } 00:15:25.668 ]' 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:25.668 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.926 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.926 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.926 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.926 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:25.927 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:26.495 09:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:26.495 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.754 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.013 00:15:27.013 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.013 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.013 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.272 { 00:15:27.272 "cntlid": 3, 00:15:27.272 "qid": 0, 00:15:27.272 "state": "enabled", 00:15:27.272 "thread": "nvmf_tgt_poll_group_000", 00:15:27.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:27.272 "listen_address": { 00:15:27.272 "trtype": "TCP", 00:15:27.272 "adrfam": "IPv4", 00:15:27.272 
"traddr": "10.0.0.2", 00:15:27.272 "trsvcid": "4420" 00:15:27.272 }, 00:15:27.272 "peer_address": { 00:15:27.272 "trtype": "TCP", 00:15:27.272 "adrfam": "IPv4", 00:15:27.272 "traddr": "10.0.0.1", 00:15:27.272 "trsvcid": "42276" 00:15:27.272 }, 00:15:27.272 "auth": { 00:15:27.272 "state": "completed", 00:15:27.272 "digest": "sha256", 00:15:27.272 "dhgroup": "null" 00:15:27.272 } 00:15:27.272 } 00:15:27.272 ]' 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.272 09:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.530 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:27.531 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.097 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.356 09:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.614 00:15:28.614 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.614 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.614 
09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.873 { 00:15:28.873 "cntlid": 5, 00:15:28.873 "qid": 0, 00:15:28.873 "state": "enabled", 00:15:28.873 "thread": "nvmf_tgt_poll_group_000", 00:15:28.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:28.873 "listen_address": { 00:15:28.873 "trtype": "TCP", 00:15:28.873 "adrfam": "IPv4", 00:15:28.873 "traddr": "10.0.0.2", 00:15:28.873 "trsvcid": "4420" 00:15:28.873 }, 00:15:28.873 "peer_address": { 00:15:28.873 "trtype": "TCP", 00:15:28.873 "adrfam": "IPv4", 00:15:28.873 "traddr": "10.0.0.1", 00:15:28.873 "trsvcid": "42284" 00:15:28.873 }, 00:15:28.873 "auth": { 00:15:28.873 "state": "completed", 00:15:28.873 "digest": "sha256", 00:15:28.873 "dhgroup": "null" 00:15:28.873 } 00:15:28.873 } 00:15:28.873 ]' 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.873 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.132 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:29.132 09:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:29.699 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.699 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:29.699 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.700 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.700 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.700 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.700 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.700 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.959 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.217 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.217 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.475 
09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.475 { 00:15:30.475 "cntlid": 7, 00:15:30.475 "qid": 0, 00:15:30.475 "state": "enabled", 00:15:30.475 "thread": "nvmf_tgt_poll_group_000", 00:15:30.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:30.475 "listen_address": { 00:15:30.475 "trtype": "TCP", 00:15:30.475 "adrfam": "IPv4", 00:15:30.475 "traddr": "10.0.0.2", 00:15:30.475 "trsvcid": "4420" 00:15:30.475 }, 00:15:30.475 "peer_address": { 00:15:30.475 "trtype": "TCP", 00:15:30.475 "adrfam": "IPv4", 00:15:30.475 "traddr": "10.0.0.1", 00:15:30.475 "trsvcid": "42320" 00:15:30.475 }, 00:15:30.475 "auth": { 00:15:30.475 "state": "completed", 00:15:30.475 "digest": "sha256", 00:15:30.475 "dhgroup": "null" 00:15:30.475 } 00:15:30.475 } 00:15:30.475 ]' 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.475 09:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.734 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:30.734 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:31.300 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.300 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:31.300 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:31.301 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.559 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.560 09:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.560 00:15:31.818 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.819 { 00:15:31.819 "cntlid": 9, 00:15:31.819 "qid": 0, 00:15:31.819 "state": "enabled", 00:15:31.819 "thread": "nvmf_tgt_poll_group_000", 00:15:31.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:31.819 "listen_address": { 00:15:31.819 "trtype": "TCP", 00:15:31.819 "adrfam": "IPv4", 00:15:31.819 "traddr": "10.0.0.2", 00:15:31.819 "trsvcid": "4420" 00:15:31.819 }, 00:15:31.819 "peer_address": { 00:15:31.819 "trtype": "TCP", 00:15:31.819 "adrfam": "IPv4", 00:15:31.819 "traddr": "10.0.0.1", 00:15:31.819 "trsvcid": "42348" 00:15:31.819 
}, 00:15:31.819 "auth": { 00:15:31.819 "state": "completed", 00:15:31.819 "digest": "sha256", 00:15:31.819 "dhgroup": "ffdhe2048" 00:15:31.819 } 00:15:31.819 } 00:15:31.819 ]' 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.819 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.078 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.078 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.078 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.078 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.078 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.338 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:32.338 09:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret 
DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.905 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.906 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.906 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.906 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.165 00:15:33.165 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.165 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.165 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.424 { 00:15:33.424 "cntlid": 11, 00:15:33.424 "qid": 0, 00:15:33.424 "state": "enabled", 00:15:33.424 "thread": "nvmf_tgt_poll_group_000", 00:15:33.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:33.424 "listen_address": { 00:15:33.424 "trtype": "TCP", 00:15:33.424 "adrfam": "IPv4", 00:15:33.424 "traddr": "10.0.0.2", 00:15:33.424 "trsvcid": "4420" 00:15:33.424 }, 00:15:33.424 "peer_address": { 00:15:33.424 "trtype": "TCP", 00:15:33.424 "adrfam": "IPv4", 00:15:33.424 "traddr": "10.0.0.1", 00:15:33.424 "trsvcid": "42372" 00:15:33.424 }, 00:15:33.424 "auth": { 00:15:33.424 "state": "completed", 00:15:33.424 "digest": "sha256", 00:15:33.424 "dhgroup": "ffdhe2048" 00:15:33.424 } 00:15:33.424 } 00:15:33.424 ]' 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.424 09:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.683 09:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:33.683 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.683 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.683 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.683 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.684 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:33.684 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.251 09:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.510 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.769 00:15:34.769 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.769 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.769 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.028 09:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.028 { 00:15:35.028 "cntlid": 13, 00:15:35.028 "qid": 0, 00:15:35.028 "state": "enabled", 00:15:35.028 "thread": "nvmf_tgt_poll_group_000", 00:15:35.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:35.028 "listen_address": { 00:15:35.028 "trtype": "TCP", 00:15:35.028 "adrfam": "IPv4", 00:15:35.028 "traddr": "10.0.0.2", 00:15:35.028 "trsvcid": "4420" 00:15:35.028 }, 00:15:35.028 "peer_address": { 00:15:35.028 "trtype": "TCP", 00:15:35.028 "adrfam": "IPv4", 00:15:35.028 "traddr": "10.0.0.1", 00:15:35.028 "trsvcid": "42406" 00:15:35.028 }, 00:15:35.028 "auth": { 00:15:35.028 "state": "completed", 00:15:35.028 "digest": "sha256", 00:15:35.028 "dhgroup": "ffdhe2048" 00:15:35.028 } 00:15:35.028 } 00:15:35.028 ]' 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.028 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.287 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.287 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.287 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.287 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:35.287 09:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:35.855 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.114 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.373 00:15:36.373 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.373 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.374 09:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.633 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.634 { 00:15:36.634 "cntlid": 15, 00:15:36.634 "qid": 0, 00:15:36.634 "state": "enabled", 00:15:36.634 "thread": "nvmf_tgt_poll_group_000", 00:15:36.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:36.634 "listen_address": { 00:15:36.634 "trtype": "TCP", 00:15:36.634 "adrfam": "IPv4", 00:15:36.634 "traddr": "10.0.0.2", 00:15:36.634 "trsvcid": "4420" 00:15:36.634 }, 00:15:36.634 "peer_address": { 00:15:36.634 "trtype": "TCP", 00:15:36.634 "adrfam": "IPv4", 00:15:36.634 "traddr": "10.0.0.1", 
00:15:36.634 "trsvcid": "47788" 00:15:36.634 }, 00:15:36.634 "auth": { 00:15:36.634 "state": "completed", 00:15:36.634 "digest": "sha256", 00:15:36.634 "dhgroup": "ffdhe2048" 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ]' 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.634 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.893 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:36.893 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:37.461 09:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.461 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.720 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.720 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.721 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.980 00:15:37.980 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.980 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.980 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.239 { 00:15:38.239 "cntlid": 17, 00:15:38.239 "qid": 0, 00:15:38.239 "state": "enabled", 00:15:38.239 "thread": "nvmf_tgt_poll_group_000", 00:15:38.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:38.239 "listen_address": { 00:15:38.239 "trtype": "TCP", 00:15:38.239 "adrfam": "IPv4", 00:15:38.239 "traddr": "10.0.0.2", 00:15:38.239 "trsvcid": "4420" 00:15:38.239 }, 00:15:38.239 "peer_address": { 00:15:38.239 "trtype": "TCP", 00:15:38.239 "adrfam": "IPv4", 00:15:38.239 "traddr": "10.0.0.1", 00:15:38.239 "trsvcid": "47816" 00:15:38.239 }, 00:15:38.239 "auth": { 00:15:38.239 "state": "completed", 00:15:38.239 "digest": "sha256", 00:15:38.239 "dhgroup": "ffdhe3072" 00:15:38.239 } 00:15:38.239 } 00:15:38.239 ]' 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.239 09:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:38.239 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.499 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.499 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.499 09:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.499 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:38.499 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.067 09:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.067 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.327 09:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.327 09:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.586 00:15:39.587 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.587 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.587 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.846 { 00:15:39.846 "cntlid": 19, 00:15:39.846 "qid": 0, 00:15:39.846 "state": "enabled", 00:15:39.846 "thread": "nvmf_tgt_poll_group_000", 00:15:39.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:39.846 "listen_address": { 00:15:39.846 "trtype": "TCP", 00:15:39.846 "adrfam": "IPv4", 00:15:39.846 "traddr": "10.0.0.2", 00:15:39.846 "trsvcid": "4420" 00:15:39.846 }, 00:15:39.846 "peer_address": { 00:15:39.846 "trtype": "TCP", 00:15:39.846 "adrfam": "IPv4", 00:15:39.846 "traddr": "10.0.0.1", 00:15:39.846 "trsvcid": "47826" 00:15:39.846 }, 00:15:39.846 "auth": { 00:15:39.846 "state": "completed", 00:15:39.846 "digest": "sha256", 00:15:39.846 "dhgroup": "ffdhe3072" 00:15:39.846 } 00:15:39.846 } 00:15:39.846 ]' 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.846 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.105 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:40.105 09:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.673 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.673 09:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:40.933 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:40.933 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.933 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.933 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.933 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.934 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.192 00:15:41.192 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.192 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.192 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.451 { 00:15:41.451 "cntlid": 21, 00:15:41.451 "qid": 0, 00:15:41.451 "state": "enabled", 00:15:41.451 "thread": "nvmf_tgt_poll_group_000", 00:15:41.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:41.451 "listen_address": { 00:15:41.451 "trtype": "TCP", 00:15:41.451 "adrfam": "IPv4", 00:15:41.451 "traddr": "10.0.0.2", 00:15:41.451 
"trsvcid": "4420" 00:15:41.451 }, 00:15:41.451 "peer_address": { 00:15:41.451 "trtype": "TCP", 00:15:41.451 "adrfam": "IPv4", 00:15:41.451 "traddr": "10.0.0.1", 00:15:41.451 "trsvcid": "47858" 00:15:41.451 }, 00:15:41.451 "auth": { 00:15:41.451 "state": "completed", 00:15:41.451 "digest": "sha256", 00:15:41.451 "dhgroup": "ffdhe3072" 00:15:41.451 } 00:15:41.451 } 00:15:41.451 ]' 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.451 09:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.711 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:41.711 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:42.279 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.538 09:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.797 00:15:42.797 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.797 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.797 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.056 { 00:15:43.056 "cntlid": 23, 00:15:43.056 "qid": 0, 00:15:43.056 "state": "enabled", 00:15:43.056 "thread": "nvmf_tgt_poll_group_000", 00:15:43.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:43.056 "listen_address": { 00:15:43.056 "trtype": "TCP", 00:15:43.056 "adrfam": "IPv4", 00:15:43.056 "traddr": "10.0.0.2", 00:15:43.056 "trsvcid": "4420" 00:15:43.056 }, 00:15:43.056 "peer_address": { 00:15:43.056 "trtype": "TCP", 00:15:43.056 "adrfam": "IPv4", 00:15:43.056 "traddr": "10.0.0.1", 00:15:43.056 "trsvcid": "47896" 00:15:43.056 }, 00:15:43.056 "auth": { 00:15:43.056 "state": "completed", 00:15:43.056 "digest": "sha256", 00:15:43.056 "dhgroup": "ffdhe3072" 00:15:43.056 } 00:15:43.056 } 00:15:43.056 ]' 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.056 09:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.056 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.315 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:43.315 09:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.883 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.143 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.403 00:15:44.403 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.403 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.403 09:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.662 09:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.662 { 00:15:44.662 "cntlid": 25, 00:15:44.662 "qid": 0, 00:15:44.662 "state": "enabled", 00:15:44.662 "thread": "nvmf_tgt_poll_group_000", 00:15:44.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.662 "listen_address": { 00:15:44.662 "trtype": "TCP", 00:15:44.662 "adrfam": "IPv4", 00:15:44.662 "traddr": "10.0.0.2", 00:15:44.662 "trsvcid": "4420" 00:15:44.662 }, 00:15:44.662 "peer_address": { 00:15:44.662 "trtype": "TCP", 00:15:44.662 "adrfam": "IPv4", 00:15:44.662 "traddr": "10.0.0.1", 00:15:44.662 "trsvcid": "47928" 00:15:44.662 }, 00:15:44.662 "auth": { 00:15:44.662 "state": "completed", 00:15:44.662 "digest": "sha256", 00:15:44.662 "dhgroup": "ffdhe4096" 00:15:44.662 } 00:15:44.662 } 00:15:44.662 ]' 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.662 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.921 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:44.921 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.517 09:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:45.517 09:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.864 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.864 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.122 { 00:15:46.122 "cntlid": 27, 00:15:46.122 "qid": 0, 00:15:46.122 "state": "enabled", 00:15:46.122 "thread": "nvmf_tgt_poll_group_000", 00:15:46.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.122 "listen_address": { 00:15:46.122 "trtype": "TCP", 00:15:46.122 "adrfam": "IPv4", 00:15:46.122 "traddr": "10.0.0.2", 00:15:46.122 
"trsvcid": "4420" 00:15:46.122 }, 00:15:46.122 "peer_address": { 00:15:46.122 "trtype": "TCP", 00:15:46.122 "adrfam": "IPv4", 00:15:46.122 "traddr": "10.0.0.1", 00:15:46.122 "trsvcid": "47950" 00:15:46.122 }, 00:15:46.122 "auth": { 00:15:46.122 "state": "completed", 00:15:46.122 "digest": "sha256", 00:15:46.122 "dhgroup": "ffdhe4096" 00:15:46.122 } 00:15:46.122 } 00:15:46.122 ]' 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.122 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:46.381 09:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:46.948 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.207 09:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.467 00:15:47.467 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.467 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:47.467 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.726 { 00:15:47.726 "cntlid": 29, 00:15:47.726 "qid": 0, 00:15:47.726 "state": "enabled", 00:15:47.726 "thread": "nvmf_tgt_poll_group_000", 00:15:47.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:47.726 "listen_address": { 00:15:47.726 "trtype": "TCP", 00:15:47.726 "adrfam": "IPv4", 00:15:47.726 "traddr": "10.0.0.2", 00:15:47.726 "trsvcid": "4420" 00:15:47.726 }, 00:15:47.726 "peer_address": { 00:15:47.726 "trtype": "TCP", 00:15:47.726 "adrfam": "IPv4", 00:15:47.726 "traddr": "10.0.0.1", 00:15:47.726 "trsvcid": "49982" 00:15:47.726 }, 00:15:47.726 "auth": { 00:15:47.726 "state": "completed", 00:15:47.726 "digest": "sha256", 00:15:47.726 "dhgroup": "ffdhe4096" 00:15:47.726 } 00:15:47.726 } 00:15:47.726 ]' 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.726 09:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.726 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.985 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.985 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.985 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.985 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:47.985 09:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:48.552 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.553 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.814 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.074 00:15:49.074 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.074 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.074 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.333 { 00:15:49.333 "cntlid": 31, 00:15:49.333 "qid": 0, 00:15:49.333 "state": "enabled", 00:15:49.333 "thread": "nvmf_tgt_poll_group_000", 00:15:49.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:49.333 "listen_address": { 00:15:49.333 "trtype": "TCP", 00:15:49.333 "adrfam": "IPv4", 00:15:49.333 "traddr": "10.0.0.2", 00:15:49.333 "trsvcid": "4420" 00:15:49.333 }, 00:15:49.333 "peer_address": { 00:15:49.333 "trtype": "TCP", 00:15:49.333 "adrfam": "IPv4", 00:15:49.333 "traddr": "10.0.0.1", 00:15:49.333 "trsvcid": "50010" 00:15:49.333 }, 00:15:49.333 "auth": { 00:15:49.333 "state": "completed", 00:15:49.333 "digest": "sha256", 00:15:49.333 "dhgroup": "ffdhe4096" 00:15:49.333 } 00:15:49.333 } 00:15:49.333 ]' 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.333 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.593 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.593 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.593 09:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.593 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:49.593 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.161 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.161 09:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.420 09:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.679 00:15:50.679 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.679 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.679 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.937 { 00:15:50.937 "cntlid": 33, 00:15:50.937 "qid": 0, 00:15:50.937 "state": "enabled", 00:15:50.937 "thread": "nvmf_tgt_poll_group_000", 00:15:50.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:50.937 "listen_address": { 00:15:50.937 "trtype": "TCP", 00:15:50.937 "adrfam": "IPv4", 00:15:50.937 "traddr": "10.0.0.2", 00:15:50.937 
"trsvcid": "4420" 00:15:50.937 }, 00:15:50.937 "peer_address": { 00:15:50.937 "trtype": "TCP", 00:15:50.937 "adrfam": "IPv4", 00:15:50.937 "traddr": "10.0.0.1", 00:15:50.937 "trsvcid": "50036" 00:15:50.937 }, 00:15:50.937 "auth": { 00:15:50.937 "state": "completed", 00:15:50.937 "digest": "sha256", 00:15:50.937 "dhgroup": "ffdhe6144" 00:15:50.937 } 00:15:50.937 } 00:15:50.937 ]' 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.937 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.196 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.196 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.196 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.196 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.196 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.454 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:51.454 09:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.023 09:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.023 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.592 00:15:52.592 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.592 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.592 09:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.592 { 00:15:52.592 "cntlid": 35, 00:15:52.592 "qid": 0, 00:15:52.592 "state": "enabled", 00:15:52.592 "thread": "nvmf_tgt_poll_group_000", 00:15:52.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:52.592 "listen_address": { 00:15:52.592 "trtype": "TCP", 00:15:52.592 "adrfam": "IPv4", 00:15:52.592 "traddr": "10.0.0.2", 00:15:52.592 "trsvcid": "4420" 00:15:52.592 }, 00:15:52.592 "peer_address": { 00:15:52.592 "trtype": "TCP", 00:15:52.592 "adrfam": "IPv4", 00:15:52.592 "traddr": "10.0.0.1", 00:15:52.592 "trsvcid": "50050" 00:15:52.592 }, 00:15:52.592 "auth": { 00:15:52.592 "state": "completed", 00:15:52.592 "digest": "sha256", 00:15:52.592 "dhgroup": "ffdhe6144" 00:15:52.592 } 00:15:52.592 } 00:15:52.592 ]' 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.592 09:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.592 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.851 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.851 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.851 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.851 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.851 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.852 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:52.852 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:53.420 09:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.679 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.938 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.197 09:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.197 { 00:15:54.197 "cntlid": 37, 00:15:54.197 "qid": 0, 00:15:54.197 "state": "enabled", 00:15:54.197 "thread": "nvmf_tgt_poll_group_000", 00:15:54.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:54.197 "listen_address": { 00:15:54.197 "trtype": "TCP", 00:15:54.197 "adrfam": "IPv4", 00:15:54.197 "traddr": "10.0.0.2", 00:15:54.197 "trsvcid": "4420" 00:15:54.197 }, 00:15:54.197 "peer_address": { 00:15:54.197 "trtype": "TCP", 00:15:54.197 "adrfam": "IPv4", 00:15:54.197 "traddr": "10.0.0.1", 00:15:54.197 "trsvcid": "50086" 00:15:54.197 }, 00:15:54.197 "auth": { 00:15:54.197 "state": "completed", 00:15:54.197 "digest": "sha256", 00:15:54.197 "dhgroup": "ffdhe6144" 00:15:54.197 } 00:15:54.197 } 00:15:54.197 ]' 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.197 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.456 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.456 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.456 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.456 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.456 09:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.715 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:54.715 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.283 09:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.851 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.851 { 00:15:55.851 "cntlid": 39, 00:15:55.851 "qid": 0, 00:15:55.851 "state": "enabled", 00:15:55.851 "thread": "nvmf_tgt_poll_group_000", 00:15:55.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:55.851 "listen_address": { 00:15:55.851 "trtype": "TCP", 00:15:55.851 "adrfam": 
"IPv4", 00:15:55.851 "traddr": "10.0.0.2", 00:15:55.851 "trsvcid": "4420" 00:15:55.851 }, 00:15:55.851 "peer_address": { 00:15:55.851 "trtype": "TCP", 00:15:55.851 "adrfam": "IPv4", 00:15:55.851 "traddr": "10.0.0.1", 00:15:55.851 "trsvcid": "50108" 00:15:55.851 }, 00:15:55.851 "auth": { 00:15:55.851 "state": "completed", 00:15:55.851 "digest": "sha256", 00:15:55.851 "dhgroup": "ffdhe6144" 00:15:55.851 } 00:15:55.851 } 00:15:55.851 ]' 00:15:55.851 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.110 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.369 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:56.369 09:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:56.937 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.196 
09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.196 09:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.454 00:15:57.454 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.454 09:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.454 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.712 { 00:15:57.712 "cntlid": 41, 00:15:57.712 "qid": 0, 00:15:57.712 "state": "enabled", 00:15:57.712 "thread": "nvmf_tgt_poll_group_000", 00:15:57.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:57.712 "listen_address": { 00:15:57.712 "trtype": "TCP", 00:15:57.712 "adrfam": "IPv4", 00:15:57.712 "traddr": "10.0.0.2", 00:15:57.712 "trsvcid": "4420" 00:15:57.712 }, 00:15:57.712 "peer_address": { 00:15:57.712 "trtype": "TCP", 00:15:57.712 "adrfam": "IPv4", 00:15:57.712 "traddr": "10.0.0.1", 00:15:57.712 "trsvcid": "41802" 00:15:57.712 }, 00:15:57.712 "auth": { 00:15:57.712 "state": "completed", 00:15:57.712 "digest": "sha256", 00:15:57.712 "dhgroup": "ffdhe8192" 00:15:57.712 } 00:15:57.712 } 00:15:57.712 ]' 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:57.712 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.971 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:57.971 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.971 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.971 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.972 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.972 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:57.972 09:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:58.539 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.798 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.365 00:15:59.365 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.365 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.365 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.625 09:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.625 { 00:15:59.625 "cntlid": 43, 00:15:59.625 "qid": 0, 00:15:59.625 "state": "enabled", 00:15:59.625 "thread": "nvmf_tgt_poll_group_000", 00:15:59.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:59.625 "listen_address": { 00:15:59.625 "trtype": "TCP", 00:15:59.625 "adrfam": "IPv4", 00:15:59.625 "traddr": "10.0.0.2", 00:15:59.625 "trsvcid": "4420" 00:15:59.625 }, 00:15:59.625 "peer_address": { 00:15:59.625 "trtype": "TCP", 00:15:59.625 "adrfam": "IPv4", 00:15:59.625 "traddr": "10.0.0.1", 00:15:59.625 "trsvcid": "41812" 00:15:59.625 }, 00:15:59.625 "auth": { 00:15:59.625 "state": "completed", 00:15:59.625 "digest": "sha256", 00:15:59.625 "dhgroup": "ffdhe8192" 00:15:59.625 } 00:15:59.625 } 00:15:59.625 ]' 00:15:59.625 09:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.625 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.883 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:15:59.883 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.450 09:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.710 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.278 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.278 { 00:16:01.278 "cntlid": 45, 00:16:01.278 "qid": 0, 00:16:01.278 "state": "enabled", 00:16:01.278 "thread": "nvmf_tgt_poll_group_000", 00:16:01.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.278 
"listen_address": { 00:16:01.278 "trtype": "TCP", 00:16:01.278 "adrfam": "IPv4", 00:16:01.278 "traddr": "10.0.0.2", 00:16:01.278 "trsvcid": "4420" 00:16:01.278 }, 00:16:01.278 "peer_address": { 00:16:01.278 "trtype": "TCP", 00:16:01.278 "adrfam": "IPv4", 00:16:01.278 "traddr": "10.0.0.1", 00:16:01.278 "trsvcid": "41842" 00:16:01.278 }, 00:16:01.278 "auth": { 00:16:01.278 "state": "completed", 00:16:01.278 "digest": "sha256", 00:16:01.278 "dhgroup": "ffdhe8192" 00:16:01.278 } 00:16:01.278 } 00:16:01.278 ]' 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.278 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.538 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.538 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.538 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.538 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.539 09:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.539 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:01.539 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:02.106 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.106 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.106 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.106 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.366 09:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.934 00:16:02.934 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.934 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:02.934 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.193 { 00:16:03.193 "cntlid": 47, 00:16:03.193 "qid": 0, 00:16:03.193 "state": "enabled", 00:16:03.193 "thread": "nvmf_tgt_poll_group_000", 00:16:03.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.193 "listen_address": { 00:16:03.193 "trtype": "TCP", 00:16:03.193 "adrfam": "IPv4", 00:16:03.193 "traddr": "10.0.0.2", 00:16:03.193 "trsvcid": "4420" 00:16:03.193 }, 00:16:03.193 "peer_address": { 00:16:03.193 "trtype": "TCP", 00:16:03.193 "adrfam": "IPv4", 00:16:03.193 "traddr": "10.0.0.1", 00:16:03.193 "trsvcid": "41876" 00:16:03.193 }, 00:16:03.193 "auth": { 00:16:03.193 "state": "completed", 00:16:03.193 "digest": "sha256", 00:16:03.193 "dhgroup": "ffdhe8192" 00:16:03.193 } 00:16:03.193 } 00:16:03.193 ]' 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.193 09:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.193 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.452 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:03.452 09:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.026 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.290 
09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.290 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.549 00:16:04.549 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.549 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.549 09:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.808 { 00:16:04.808 "cntlid": 49, 00:16:04.808 "qid": 0, 00:16:04.808 "state": "enabled", 00:16:04.808 "thread": "nvmf_tgt_poll_group_000", 00:16:04.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:04.808 "listen_address": { 00:16:04.808 "trtype": "TCP", 00:16:04.808 "adrfam": "IPv4", 00:16:04.808 "traddr": "10.0.0.2", 00:16:04.808 "trsvcid": "4420" 00:16:04.808 }, 00:16:04.808 "peer_address": { 00:16:04.808 "trtype": "TCP", 00:16:04.808 "adrfam": "IPv4", 00:16:04.808 "traddr": "10.0.0.1", 00:16:04.808 "trsvcid": "41904" 00:16:04.808 }, 00:16:04.808 "auth": { 00:16:04.808 "state": "completed", 00:16:04.808 "digest": "sha384", 00:16:04.808 "dhgroup": "null" 00:16:04.808 } 00:16:04.808 } 00:16:04.808 ]' 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:04.808 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.066 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:05.067 09:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.634 09:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:05.634 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.892 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.150 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.150 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.411 { 00:16:06.411 "cntlid": 51, 00:16:06.411 "qid": 0, 00:16:06.411 "state": "enabled", 00:16:06.411 "thread": "nvmf_tgt_poll_group_000", 00:16:06.411 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.411 "listen_address": { 00:16:06.411 "trtype": "TCP", 00:16:06.411 "adrfam": "IPv4", 00:16:06.411 "traddr": "10.0.0.2", 00:16:06.411 "trsvcid": "4420" 00:16:06.411 }, 00:16:06.411 "peer_address": { 00:16:06.411 "trtype": "TCP", 00:16:06.411 "adrfam": "IPv4", 00:16:06.411 "traddr": "10.0.0.1", 00:16:06.411 "trsvcid": "41940" 00:16:06.411 }, 00:16:06.411 "auth": { 00:16:06.411 "state": "completed", 00:16:06.411 "digest": "sha384", 00:16:06.411 "dhgroup": "null" 00:16:06.411 } 00:16:06.411 } 00:16:06.411 ]' 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.411 09:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.670 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:06.670 09:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:07.239 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.500 09:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.500 00:16:07.767 09:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.767 { 00:16:07.767 "cntlid": 53, 00:16:07.767 "qid": 0, 00:16:07.767 "state": "enabled", 00:16:07.767 "thread": "nvmf_tgt_poll_group_000", 00:16:07.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:07.767 "listen_address": { 00:16:07.767 "trtype": "TCP", 00:16:07.767 "adrfam": "IPv4", 00:16:07.767 "traddr": "10.0.0.2", 00:16:07.767 "trsvcid": "4420" 00:16:07.767 }, 00:16:07.767 "peer_address": { 00:16:07.767 "trtype": "TCP", 00:16:07.767 "adrfam": "IPv4", 00:16:07.767 "traddr": "10.0.0.1", 00:16:07.767 "trsvcid": "38974" 00:16:07.767 }, 00:16:07.767 "auth": { 00:16:07.767 "state": "completed", 00:16:07.767 "digest": "sha384", 00:16:07.767 "dhgroup": "null" 00:16:07.767 } 00:16:07.767 } 00:16:07.767 ]' 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.767 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.026 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.026 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.026 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.026 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.026 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.285 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:08.285 09:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:08.853 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.853 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.853 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.853 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.853 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.854 
09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.854 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.113 00:16:09.113 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.113 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.113 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.372 09:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.372 { 00:16:09.372 "cntlid": 55, 00:16:09.372 "qid": 0, 00:16:09.372 "state": "enabled", 00:16:09.372 "thread": "nvmf_tgt_poll_group_000", 00:16:09.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.372 "listen_address": { 00:16:09.372 "trtype": "TCP", 00:16:09.372 "adrfam": "IPv4", 00:16:09.372 "traddr": "10.0.0.2", 00:16:09.372 "trsvcid": "4420" 00:16:09.372 }, 00:16:09.372 "peer_address": { 00:16:09.372 "trtype": "TCP", 00:16:09.372 "adrfam": "IPv4", 00:16:09.372 "traddr": "10.0.0.1", 00:16:09.372 "trsvcid": "39004" 00:16:09.372 }, 00:16:09.372 "auth": { 00:16:09.372 "state": "completed", 00:16:09.372 "digest": "sha384", 00:16:09.372 "dhgroup": "null" 00:16:09.372 } 00:16:09.372 } 00:16:09.372 ]' 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.372 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.631 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.631 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.631 09:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.631 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:09.631 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.199 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.199 09:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.458 09:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.717 00:16:10.717 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.717 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.717 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.976 { 00:16:10.976 "cntlid": 57, 00:16:10.976 "qid": 0, 00:16:10.976 "state": "enabled", 00:16:10.976 "thread": "nvmf_tgt_poll_group_000", 00:16:10.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.976 "listen_address": { 00:16:10.976 "trtype": "TCP", 00:16:10.976 "adrfam": "IPv4", 00:16:10.976 "traddr": "10.0.0.2", 00:16:10.976 
"trsvcid": "4420" 00:16:10.976 }, 00:16:10.976 "peer_address": { 00:16:10.976 "trtype": "TCP", 00:16:10.976 "adrfam": "IPv4", 00:16:10.976 "traddr": "10.0.0.1", 00:16:10.976 "trsvcid": "39042" 00:16:10.976 }, 00:16:10.976 "auth": { 00:16:10.976 "state": "completed", 00:16:10.976 "digest": "sha384", 00:16:10.976 "dhgroup": "ffdhe2048" 00:16:10.976 } 00:16:10.976 } 00:16:10.976 ]' 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.976 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.235 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:11.235 09:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:11.803 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:12.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:12.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.061 09:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:12.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.061 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.062 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.320 00:16:12.320 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.320 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.320 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.579 { 00:16:12.579 "cntlid": 59, 00:16:12.579 "qid": 0, 00:16:12.579 "state": "enabled", 00:16:12.579 "thread": "nvmf_tgt_poll_group_000", 00:16:12.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.579 "listen_address": { 00:16:12.579 "trtype": "TCP", 00:16:12.579 "adrfam": "IPv4", 00:16:12.579 "traddr": "10.0.0.2", 00:16:12.579 "trsvcid": "4420" 00:16:12.579 }, 00:16:12.579 "peer_address": { 00:16:12.579 "trtype": "TCP", 00:16:12.579 "adrfam": "IPv4", 00:16:12.579 "traddr": "10.0.0.1", 00:16:12.579 "trsvcid": "39074" 00:16:12.579 }, 00:16:12.579 "auth": { 00:16:12.579 "state": "completed", 00:16:12.579 "digest": "sha384", 00:16:12.579 "dhgroup": "ffdhe2048" 00:16:12.579 } 00:16:12.579 } 00:16:12.579 ]' 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.579 09:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.579 09:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.579 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.579 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.579 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.579 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.580 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.838 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:12.838 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.406 09:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.666 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.925 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.925 09:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.925 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.187 { 00:16:14.187 "cntlid": 61, 00:16:14.187 "qid": 0, 00:16:14.187 "state": "enabled", 00:16:14.187 "thread": "nvmf_tgt_poll_group_000", 00:16:14.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:14.187 "listen_address": { 00:16:14.187 "trtype": "TCP", 00:16:14.187 "adrfam": "IPv4", 00:16:14.187 "traddr": "10.0.0.2", 00:16:14.187 "trsvcid": "4420" 00:16:14.187 }, 00:16:14.187 "peer_address": { 00:16:14.187 "trtype": "TCP", 00:16:14.187 "adrfam": "IPv4", 00:16:14.187 "traddr": "10.0.0.1", 00:16:14.187 "trsvcid": "39106" 00:16:14.187 }, 00:16:14.187 "auth": { 00:16:14.187 "state": "completed", 00:16:14.187 "digest": "sha384", 00:16:14.187 "dhgroup": "ffdhe2048" 00:16:14.187 } 00:16:14.187 } 00:16:14.187 ]' 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.187 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.446 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:14.446 09:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:15.013 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.273 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.532 00:16:15.532 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.532 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.532 09:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.532 { 00:16:15.532 "cntlid": 63, 00:16:15.532 "qid": 0, 00:16:15.532 "state": "enabled", 00:16:15.532 "thread": "nvmf_tgt_poll_group_000", 00:16:15.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.532 "listen_address": { 00:16:15.532 "trtype": "TCP", 00:16:15.532 "adrfam": 
"IPv4", 00:16:15.532 "traddr": "10.0.0.2", 00:16:15.532 "trsvcid": "4420" 00:16:15.532 }, 00:16:15.532 "peer_address": { 00:16:15.532 "trtype": "TCP", 00:16:15.532 "adrfam": "IPv4", 00:16:15.532 "traddr": "10.0.0.1", 00:16:15.532 "trsvcid": "39132" 00:16:15.532 }, 00:16:15.532 "auth": { 00:16:15.532 "state": "completed", 00:16:15.532 "digest": "sha384", 00:16:15.532 "dhgroup": "ffdhe2048" 00:16:15.532 } 00:16:15.532 } 00:16:15.532 ]' 00:16:15.532 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.790 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.049 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:16.049 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:16.616 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.616 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.616 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.617 09:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.617 
09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.617 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.875 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.875 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.875 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.875 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.875 00:16:17.134 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.134 09:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.135 { 00:16:17.135 "cntlid": 65, 00:16:17.135 "qid": 0, 00:16:17.135 "state": "enabled", 00:16:17.135 "thread": "nvmf_tgt_poll_group_000", 00:16:17.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.135 "listen_address": { 00:16:17.135 "trtype": "TCP", 00:16:17.135 "adrfam": "IPv4", 00:16:17.135 "traddr": "10.0.0.2", 00:16:17.135 "trsvcid": "4420" 00:16:17.135 }, 00:16:17.135 "peer_address": { 00:16:17.135 "trtype": "TCP", 00:16:17.135 "adrfam": "IPv4", 00:16:17.135 "traddr": "10.0.0.1", 00:16:17.135 "trsvcid": "42740" 00:16:17.135 }, 00:16:17.135 "auth": { 00:16:17.135 "state": "completed", 00:16:17.135 "digest": "sha384", 00:16:17.135 "dhgroup": "ffdhe3072" 00:16:17.135 } 00:16:17.135 } 00:16:17.135 ]' 00:16:17.135 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.396 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:17.396 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.397 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.397 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.397 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.397 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.397 09:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.656 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:17.656 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.223 09:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.482 00:16:18.482 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.482 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.740 09:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.740 { 00:16:18.740 "cntlid": 67, 00:16:18.740 "qid": 0, 00:16:18.740 "state": "enabled", 00:16:18.740 "thread": "nvmf_tgt_poll_group_000", 00:16:18.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.740 "listen_address": { 00:16:18.740 "trtype": "TCP", 00:16:18.740 "adrfam": "IPv4", 00:16:18.740 "traddr": "10.0.0.2", 00:16:18.740 "trsvcid": "4420" 00:16:18.740 }, 00:16:18.740 "peer_address": { 00:16:18.740 "trtype": "TCP", 00:16:18.740 "adrfam": "IPv4", 00:16:18.740 "traddr": "10.0.0.1", 00:16:18.740 "trsvcid": "42762" 00:16:18.740 }, 00:16:18.740 "auth": { 00:16:18.740 "state": "completed", 00:16:18.740 "digest": "sha384", 00:16:18.740 "dhgroup": "ffdhe3072" 00:16:18.740 } 00:16:18.740 } 00:16:18.740 ]' 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.740 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.999 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.999 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.999 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.999 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.999 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.258 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:19.258 09:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.826 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.086 00:16:20.086 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.086 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.086 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.345 { 00:16:20.345 "cntlid": 69, 00:16:20.345 "qid": 0, 00:16:20.345 "state": "enabled", 00:16:20.345 "thread": "nvmf_tgt_poll_group_000", 00:16:20.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.345 
"listen_address": { 00:16:20.345 "trtype": "TCP", 00:16:20.345 "adrfam": "IPv4", 00:16:20.345 "traddr": "10.0.0.2", 00:16:20.345 "trsvcid": "4420" 00:16:20.345 }, 00:16:20.345 "peer_address": { 00:16:20.345 "trtype": "TCP", 00:16:20.345 "adrfam": "IPv4", 00:16:20.345 "traddr": "10.0.0.1", 00:16:20.345 "trsvcid": "42790" 00:16:20.345 }, 00:16:20.345 "auth": { 00:16:20.345 "state": "completed", 00:16:20.345 "digest": "sha384", 00:16:20.345 "dhgroup": "ffdhe3072" 00:16:20.345 } 00:16:20.345 } 00:16:20.345 ]' 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.345 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.604 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.604 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.604 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.604 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.604 09:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.604 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:20.604 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.171 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.429 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.430 09:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.688 00:16:21.688 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.688 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:21.688 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.945 { 00:16:21.945 "cntlid": 71, 00:16:21.945 "qid": 0, 00:16:21.945 "state": "enabled", 00:16:21.945 "thread": "nvmf_tgt_poll_group_000", 00:16:21.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.945 "listen_address": { 00:16:21.945 "trtype": "TCP", 00:16:21.945 "adrfam": "IPv4", 00:16:21.945 "traddr": "10.0.0.2", 00:16:21.945 "trsvcid": "4420" 00:16:21.945 }, 00:16:21.945 "peer_address": { 00:16:21.945 "trtype": "TCP", 00:16:21.945 "adrfam": "IPv4", 00:16:21.945 "traddr": "10.0.0.1", 00:16:21.945 "trsvcid": "42824" 00:16:21.945 }, 00:16:21.945 "auth": { 00:16:21.945 "state": "completed", 00:16:21.945 "digest": "sha384", 00:16:21.945 "dhgroup": "ffdhe3072" 00:16:21.945 } 00:16:21.945 } 00:16:21.945 ]' 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.945 09:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.945 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.265 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.265 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.265 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.265 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:22.265 09:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
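The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansions traced above are how `connect_authenticate` decides whether to request bidirectional authentication: bash's `:+` form expands to the controller-key flag only when a ckey is defined for that key index, which is why the `key3` rounds attach with `--dhchap-key key3` alone while the `key0`–`key2` rounds also pass `--dhchap-ctrlr-key`. A minimal standalone sketch of that idiom (the key table and the `build_dhchap_args` helper are hypothetical, not taken from `auth.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical key table: indices 0-2 have controller (bidirectional) keys,
# index 3 does not -- mirroring the key3 rounds in the trace above.
ckeys=([0]=present [1]=present [2]=present [3]="")

# Build the --dhchap-* argument list for one key index, the way
# connect_authenticate's  ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
# line does: with :+, an empty or unset entry expands to no words at all,
# so the flag simply disappears from the final rpc.py command line.
build_dhchap_args() {
  local i=$1
  local ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "--dhchap-key key$i" "${ckey[@]}"
}

build_dhchap_args 0   # -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
build_dhchap_args 3   # -> --dhchap-key key3
```

This is also why the xtrace output shows the expansion result directly in the `bdev_nvme_attach_controller` lines rather than the `${ckeys[...]}` source text.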
00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.835 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.093 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.350 00:16:23.350 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.350 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.350 09:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.608 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.608 { 00:16:23.608 "cntlid": 73, 00:16:23.608 "qid": 0, 00:16:23.608 "state": "enabled", 00:16:23.608 "thread": "nvmf_tgt_poll_group_000", 00:16:23.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.608 "listen_address": { 00:16:23.608 "trtype": "TCP", 00:16:23.608 "adrfam": "IPv4", 00:16:23.608 "traddr": "10.0.0.2", 00:16:23.608 "trsvcid": "4420" 00:16:23.608 }, 00:16:23.608 "peer_address": { 00:16:23.608 "trtype": "TCP", 00:16:23.608 "adrfam": "IPv4", 00:16:23.608 "traddr": "10.0.0.1", 00:16:23.608 "trsvcid": "42836" 00:16:23.608 }, 00:16:23.608 "auth": { 00:16:23.608 "state": "completed", 00:16:23.608 "digest": "sha384", 00:16:23.608 "dhgroup": "ffdhe4096" 00:16:23.608 } 00:16:23.608 } 00:16:23.608 ]' 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.608 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.608 09:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.867 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:23.867 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:24.482 09:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.766 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.025 00:16:25.025 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.025 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.025 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.283 { 00:16:25.283 "cntlid": 75, 00:16:25.283 "qid": 0, 00:16:25.283 "state": "enabled", 00:16:25.283 "thread": "nvmf_tgt_poll_group_000", 00:16:25.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.283 
"listen_address": { 00:16:25.283 "trtype": "TCP", 00:16:25.283 "adrfam": "IPv4", 00:16:25.283 "traddr": "10.0.0.2", 00:16:25.283 "trsvcid": "4420" 00:16:25.283 }, 00:16:25.283 "peer_address": { 00:16:25.283 "trtype": "TCP", 00:16:25.283 "adrfam": "IPv4", 00:16:25.283 "traddr": "10.0.0.1", 00:16:25.283 "trsvcid": "42874" 00:16:25.283 }, 00:16:25.283 "auth": { 00:16:25.283 "state": "completed", 00:16:25.283 "digest": "sha384", 00:16:25.283 "dhgroup": "ffdhe4096" 00:16:25.283 } 00:16:25.283 } 00:16:25.283 ]' 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.283 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.542 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:25.542 09:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:26.109 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.368 09:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.627 00:16:26.627 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:26.627 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.627 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.886 { 00:16:26.886 "cntlid": 77, 00:16:26.886 "qid": 0, 00:16:26.886 "state": "enabled", 00:16:26.886 "thread": "nvmf_tgt_poll_group_000", 00:16:26.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.886 "listen_address": { 00:16:26.886 "trtype": "TCP", 00:16:26.886 "adrfam": "IPv4", 00:16:26.886 "traddr": "10.0.0.2", 00:16:26.886 "trsvcid": "4420" 00:16:26.886 }, 00:16:26.886 "peer_address": { 00:16:26.886 "trtype": "TCP", 00:16:26.886 "adrfam": "IPv4", 00:16:26.886 "traddr": "10.0.0.1", 00:16:26.886 "trsvcid": "50214" 00:16:26.886 }, 00:16:26.886 "auth": { 00:16:26.886 "state": "completed", 00:16:26.886 "digest": "sha384", 00:16:26.886 "dhgroup": "ffdhe4096" 00:16:26.886 } 00:16:26.886 } 00:16:26.886 ]' 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.886 09:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.886 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.145 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:27.145 09:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:27.713 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:27.973 09:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.973 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.231 00:16:28.231 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.231 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.231 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.490 09:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.490 { 00:16:28.490 "cntlid": 79, 00:16:28.490 "qid": 0, 00:16:28.490 "state": "enabled", 00:16:28.490 "thread": "nvmf_tgt_poll_group_000", 00:16:28.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.490 "listen_address": { 00:16:28.490 "trtype": "TCP", 00:16:28.490 "adrfam": "IPv4", 00:16:28.490 "traddr": "10.0.0.2", 00:16:28.490 "trsvcid": "4420" 00:16:28.490 }, 00:16:28.490 "peer_address": { 00:16:28.490 "trtype": "TCP", 00:16:28.490 "adrfam": "IPv4", 00:16:28.490 "traddr": "10.0.0.1", 00:16:28.490 "trsvcid": "50252" 00:16:28.490 }, 00:16:28.490 "auth": { 00:16:28.490 "state": "completed", 00:16:28.490 "digest": "sha384", 00:16:28.490 "dhgroup": "ffdhe4096" 00:16:28.490 } 00:16:28.490 } 00:16:28.490 ]' 00:16:28.490 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.491 09:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.491 09:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.750 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:28.750 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:29.318 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.577 09:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.836 00:16:29.836 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.836 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.836 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.095 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.095 { 00:16:30.095 "cntlid": 81, 00:16:30.095 "qid": 0, 00:16:30.095 "state": "enabled", 00:16:30.095 "thread": "nvmf_tgt_poll_group_000", 00:16:30.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.096 "listen_address": { 
00:16:30.096 "trtype": "TCP", 00:16:30.096 "adrfam": "IPv4", 00:16:30.096 "traddr": "10.0.0.2", 00:16:30.096 "trsvcid": "4420" 00:16:30.096 }, 00:16:30.096 "peer_address": { 00:16:30.096 "trtype": "TCP", 00:16:30.096 "adrfam": "IPv4", 00:16:30.096 "traddr": "10.0.0.1", 00:16:30.096 "trsvcid": "50272" 00:16:30.096 }, 00:16:30.096 "auth": { 00:16:30.096 "state": "completed", 00:16:30.096 "digest": "sha384", 00:16:30.096 "dhgroup": "ffdhe6144" 00:16:30.096 } 00:16:30.096 } 00:16:30.096 ]' 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.096 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.355 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:30.355 09:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:30.923 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.182 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.183 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.442 00:16:31.442 09:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.442 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.442 09:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.701 { 00:16:31.701 "cntlid": 83, 00:16:31.701 "qid": 0, 00:16:31.701 "state": "enabled", 00:16:31.701 "thread": "nvmf_tgt_poll_group_000", 00:16:31.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.701 "listen_address": { 00:16:31.701 "trtype": "TCP", 00:16:31.701 "adrfam": "IPv4", 00:16:31.701 "traddr": "10.0.0.2", 00:16:31.701 "trsvcid": "4420" 00:16:31.701 }, 00:16:31.701 "peer_address": { 00:16:31.701 "trtype": "TCP", 00:16:31.701 "adrfam": "IPv4", 00:16:31.701 "traddr": "10.0.0.1", 00:16:31.701 "trsvcid": "50292" 00:16:31.701 }, 00:16:31.701 "auth": { 00:16:31.701 "state": "completed", 00:16:31.701 "digest": "sha384", 00:16:31.701 "dhgroup": "ffdhe6144" 00:16:31.701 } 00:16:31.701 } 00:16:31.701 ]' 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.701 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.959 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:31.959 09:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.527 09:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:32.527 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.785 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.043 00:16:33.043 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.043 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.044 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.302 { 00:16:33.302 "cntlid": 85, 00:16:33.302 "qid": 0, 00:16:33.302 "state": "enabled", 00:16:33.302 "thread": "nvmf_tgt_poll_group_000", 00:16:33.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.302 "listen_address": { 00:16:33.302 "trtype": "TCP", 00:16:33.302 "adrfam": "IPv4", 00:16:33.302 "traddr": "10.0.0.2", 00:16:33.302 "trsvcid": "4420" 00:16:33.302 }, 00:16:33.302 "peer_address": { 00:16:33.302 "trtype": "TCP", 00:16:33.302 "adrfam": "IPv4", 00:16:33.302 "traddr": "10.0.0.1", 00:16:33.302 "trsvcid": "50314" 00:16:33.302 }, 00:16:33.302 "auth": { 00:16:33.302 "state": "completed", 00:16:33.302 "digest": "sha384", 00:16:33.302 "dhgroup": "ffdhe6144" 00:16:33.302 } 00:16:33.302 } 00:16:33.302 ]' 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.302 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.561 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:33.561 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.561 09:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.561 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:33.561 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:34.129 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.388 09:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.647 00:16:34.647 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.647 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.647 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.906 { 00:16:34.906 "cntlid": 87, 00:16:34.906 "qid": 0, 00:16:34.906 "state": "enabled", 00:16:34.906 "thread": "nvmf_tgt_poll_group_000", 00:16:34.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.906 "listen_address": { 00:16:34.906 "trtype": 
"TCP", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "10.0.0.2", 00:16:34.906 "trsvcid": "4420" 00:16:34.906 }, 00:16:34.906 "peer_address": { 00:16:34.906 "trtype": "TCP", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "10.0.0.1", 00:16:34.906 "trsvcid": "50344" 00:16:34.906 }, 00:16:34.906 "auth": { 00:16:34.906 "state": "completed", 00:16:34.906 "digest": "sha384", 00:16:34.906 "dhgroup": "ffdhe6144" 00:16:34.906 } 00:16:34.906 } 00:16:34.906 ]' 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.906 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.165 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.165 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.165 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.165 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.165 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.424 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:35.424 09:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.992 09:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.992 09:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.560 00:16:36.560 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.560 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.560 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.819 { 00:16:36.819 "cntlid": 89, 00:16:36.819 "qid": 0, 00:16:36.819 "state": "enabled", 00:16:36.819 "thread": "nvmf_tgt_poll_group_000", 00:16:36.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.819 "listen_address": { 00:16:36.819 "trtype": "TCP", 00:16:36.819 "adrfam": "IPv4", 00:16:36.819 "traddr": "10.0.0.2", 00:16:36.819 "trsvcid": "4420" 00:16:36.819 }, 00:16:36.819 "peer_address": { 00:16:36.819 "trtype": "TCP", 00:16:36.819 "adrfam": "IPv4", 00:16:36.819 "traddr": "10.0.0.1", 00:16:36.819 "trsvcid": "58898" 00:16:36.819 }, 00:16:36.819 "auth": { 00:16:36.819 "state": "completed", 00:16:36.819 "digest": "sha384", 00:16:36.819 "dhgroup": "ffdhe8192" 00:16:36.819 } 00:16:36.819 } 00:16:36.819 ]' 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.819 09:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.819 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.078 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:37.078 09:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:37.646 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.905 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.473 00:16:38.473 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.473 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.473 09:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.473 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.473 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.473 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.473 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.732 { 00:16:38.732 "cntlid": 91, 00:16:38.732 "qid": 0, 00:16:38.732 "state": "enabled", 00:16:38.732 "thread": "nvmf_tgt_poll_group_000", 00:16:38.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.732 "listen_address": { 00:16:38.732 "trtype": "TCP", 00:16:38.732 "adrfam": "IPv4", 00:16:38.732 "traddr": "10.0.0.2", 00:16:38.732 "trsvcid": "4420" 00:16:38.732 }, 00:16:38.732 "peer_address": { 00:16:38.732 "trtype": "TCP", 00:16:38.732 "adrfam": "IPv4", 00:16:38.732 "traddr": "10.0.0.1", 00:16:38.732 "trsvcid": "58918" 00:16:38.732 }, 00:16:38.732 "auth": { 00:16:38.732 "state": "completed", 00:16:38.732 "digest": "sha384", 00:16:38.732 "dhgroup": "ffdhe8192" 00:16:38.732 } 00:16:38.732 } 00:16:38.732 ]' 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.732 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.733 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:38.733 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.733 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.992 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:38.992 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.559 09:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.818 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:39.818 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.818 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.818 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.818 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.819 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.078 00:16:40.078 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.078 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.078 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.337 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.337 { 00:16:40.337 "cntlid": 93, 00:16:40.337 "qid": 0, 00:16:40.337 "state": "enabled", 00:16:40.337 "thread": "nvmf_tgt_poll_group_000", 00:16:40.337 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.337 "listen_address": { 00:16:40.337 "trtype": "TCP", 00:16:40.337 "adrfam": "IPv4", 00:16:40.337 "traddr": "10.0.0.2", 00:16:40.337 "trsvcid": "4420" 00:16:40.337 }, 00:16:40.337 "peer_address": { 00:16:40.337 "trtype": "TCP", 00:16:40.337 "adrfam": "IPv4", 00:16:40.337 "traddr": "10.0.0.1", 00:16:40.337 "trsvcid": "58940" 00:16:40.338 }, 00:16:40.338 "auth": { 00:16:40.338 "state": "completed", 00:16:40.338 "digest": "sha384", 00:16:40.338 "dhgroup": "ffdhe8192" 00:16:40.338 } 00:16:40.338 } 00:16:40.338 ]' 00:16:40.338 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.338 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.338 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.596 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.596 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.596 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.596 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.596 09:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.855 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:40.855 09:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.423 09:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.991 00:16:41.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:41.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.991 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.250 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.250 { 00:16:42.250 "cntlid": 95, 00:16:42.250 "qid": 0, 00:16:42.250 "state": "enabled", 00:16:42.250 "thread": "nvmf_tgt_poll_group_000", 00:16:42.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.250 "listen_address": { 00:16:42.250 "trtype": "TCP", 00:16:42.250 "adrfam": "IPv4", 00:16:42.251 "traddr": "10.0.0.2", 00:16:42.251 "trsvcid": "4420" 00:16:42.251 }, 00:16:42.251 "peer_address": { 00:16:42.251 "trtype": "TCP", 00:16:42.251 "adrfam": "IPv4", 00:16:42.251 "traddr": "10.0.0.1", 00:16:42.251 "trsvcid": "58966" 00:16:42.251 }, 00:16:42.251 "auth": { 00:16:42.251 "state": "completed", 00:16:42.251 "digest": "sha384", 00:16:42.251 "dhgroup": "ffdhe8192" 00:16:42.251 } 00:16:42.251 } 00:16:42.251 ]' 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.251 09:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.251 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.509 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:42.509 09:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.077 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.337 09:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.596 00:16:43.596 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.596 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.596 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.855 09:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.855 { 00:16:43.855 "cntlid": 97, 00:16:43.855 "qid": 0, 00:16:43.855 "state": "enabled", 00:16:43.855 "thread": "nvmf_tgt_poll_group_000", 00:16:43.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.855 "listen_address": { 00:16:43.855 "trtype": "TCP", 00:16:43.855 "adrfam": "IPv4", 00:16:43.855 "traddr": "10.0.0.2", 00:16:43.855 "trsvcid": "4420" 00:16:43.855 }, 00:16:43.855 "peer_address": { 00:16:43.855 "trtype": "TCP", 00:16:43.855 "adrfam": "IPv4", 00:16:43.855 "traddr": "10.0.0.1", 00:16:43.855 "trsvcid": "59006" 00:16:43.855 }, 00:16:43.855 "auth": { 00:16:43.855 "state": "completed", 00:16:43.855 "digest": "sha512", 00:16:43.855 "dhgroup": "null" 00:16:43.855 } 00:16:43.855 } 00:16:43.855 ]' 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.855 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.114 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:44.114 09:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.707 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.965 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.966 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.225 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.225 { 00:16:45.225 "cntlid": 99, 
00:16:45.225 "qid": 0, 00:16:45.225 "state": "enabled", 00:16:45.225 "thread": "nvmf_tgt_poll_group_000", 00:16:45.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.225 "listen_address": { 00:16:45.225 "trtype": "TCP", 00:16:45.225 "adrfam": "IPv4", 00:16:45.225 "traddr": "10.0.0.2", 00:16:45.225 "trsvcid": "4420" 00:16:45.225 }, 00:16:45.225 "peer_address": { 00:16:45.225 "trtype": "TCP", 00:16:45.225 "adrfam": "IPv4", 00:16:45.225 "traddr": "10.0.0.1", 00:16:45.225 "trsvcid": "59018" 00:16:45.225 }, 00:16:45.225 "auth": { 00:16:45.225 "state": "completed", 00:16:45.225 "digest": "sha512", 00:16:45.225 "dhgroup": "null" 00:16:45.225 } 00:16:45.225 } 00:16:45.225 ]' 00:16:45.225 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.483 09:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.742 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret 
DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:45.742 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.310 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.569 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.569 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.569 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.569 09:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.569 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.828 { 00:16:46.828 "cntlid": 101, 00:16:46.828 "qid": 0, 00:16:46.828 "state": "enabled", 00:16:46.828 "thread": "nvmf_tgt_poll_group_000", 00:16:46.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:46.828 "listen_address": { 00:16:46.828 "trtype": "TCP", 00:16:46.828 "adrfam": "IPv4", 00:16:46.828 "traddr": "10.0.0.2", 00:16:46.828 "trsvcid": "4420" 00:16:46.828 }, 00:16:46.828 "peer_address": { 00:16:46.828 "trtype": "TCP", 00:16:46.828 "adrfam": "IPv4", 00:16:46.828 "traddr": "10.0.0.1", 00:16:46.828 "trsvcid": "53580" 00:16:46.828 }, 00:16:46.828 "auth": { 00:16:46.828 "state": "completed", 00:16:46.828 "digest": "sha512", 00:16:46.828 "dhgroup": "null" 00:16:46.828 } 00:16:46.828 } 
00:16:46.828 ]' 00:16:46.828 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.087 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.088 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.346 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:47.346 09:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.913 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.913 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.172 00:16:48.172 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.172 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.172 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.431 { 00:16:48.431 "cntlid": 103, 00:16:48.431 "qid": 0, 00:16:48.431 "state": "enabled", 00:16:48.431 "thread": "nvmf_tgt_poll_group_000", 00:16:48.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.431 "listen_address": { 00:16:48.431 "trtype": "TCP", 00:16:48.431 "adrfam": "IPv4", 00:16:48.431 "traddr": "10.0.0.2", 00:16:48.431 "trsvcid": "4420" 00:16:48.431 }, 00:16:48.431 "peer_address": { 00:16:48.431 "trtype": "TCP", 00:16:48.431 "adrfam": "IPv4", 00:16:48.431 "traddr": "10.0.0.1", 00:16:48.431 "trsvcid": "53614" 00:16:48.431 }, 00:16:48.431 "auth": { 00:16:48.431 "state": "completed", 00:16:48.431 "digest": "sha512", 00:16:48.431 "dhgroup": "null" 00:16:48.431 } 00:16:48.431 } 00:16:48.431 ]' 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:48.431 09:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.431 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.431 09:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.431 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.689 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:48.689 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.257 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.258 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.258 09:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.258 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.516 09:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.775 00:16:49.775 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.775 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.775 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.034 { 00:16:50.034 "cntlid": 105, 00:16:50.034 "qid": 0, 00:16:50.034 "state": "enabled", 00:16:50.034 "thread": "nvmf_tgt_poll_group_000", 00:16:50.034 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.034 "listen_address": { 00:16:50.034 "trtype": "TCP", 00:16:50.034 "adrfam": "IPv4", 00:16:50.034 "traddr": "10.0.0.2", 00:16:50.034 "trsvcid": "4420" 00:16:50.034 }, 00:16:50.034 "peer_address": { 00:16:50.034 "trtype": "TCP", 00:16:50.034 "adrfam": "IPv4", 00:16:50.034 "traddr": "10.0.0.1", 00:16:50.034 "trsvcid": "53640" 00:16:50.034 }, 00:16:50.034 "auth": { 00:16:50.034 "state": "completed", 00:16:50.034 "digest": "sha512", 00:16:50.034 "dhgroup": "ffdhe2048" 00:16:50.034 } 00:16:50.034 } 00:16:50.034 ]' 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.034 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.293 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret 
DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:50.293 09:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.862 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.121 09:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.121 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.380 00:16:51.380 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.380 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.380 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.639 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.639 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.639 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.639 09:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.639 { 00:16:51.639 "cntlid": 107, 00:16:51.639 "qid": 0, 00:16:51.639 "state": "enabled", 00:16:51.639 "thread": "nvmf_tgt_poll_group_000", 00:16:51.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.639 "listen_address": { 00:16:51.639 "trtype": "TCP", 00:16:51.639 "adrfam": "IPv4", 00:16:51.639 "traddr": "10.0.0.2", 00:16:51.639 "trsvcid": "4420" 00:16:51.639 }, 00:16:51.639 "peer_address": { 00:16:51.639 "trtype": "TCP", 00:16:51.639 "adrfam": "IPv4", 00:16:51.639 "traddr": "10.0.0.1", 00:16:51.639 "trsvcid": "53660" 00:16:51.639 }, 00:16:51.639 "auth": { 00:16:51.639 "state": 
"completed", 00:16:51.639 "digest": "sha512", 00:16:51.639 "dhgroup": "ffdhe2048" 00:16:51.639 } 00:16:51.639 } 00:16:51.639 ]' 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.639 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.640 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.898 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:51.898 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:52.466 09:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:52.466 09:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.725 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.984 00:16:52.984 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.984 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.984 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.242 
09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.242 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.242 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.242 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.242 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.242 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.242 { 00:16:53.242 "cntlid": 109, 00:16:53.242 "qid": 0, 00:16:53.242 "state": "enabled", 00:16:53.242 "thread": "nvmf_tgt_poll_group_000", 00:16:53.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.242 "listen_address": { 00:16:53.242 "trtype": "TCP", 00:16:53.242 "adrfam": "IPv4", 00:16:53.242 "traddr": "10.0.0.2", 00:16:53.242 "trsvcid": "4420" 00:16:53.242 }, 00:16:53.242 "peer_address": { 00:16:53.242 "trtype": "TCP", 00:16:53.242 "adrfam": "IPv4", 00:16:53.242 "traddr": "10.0.0.1", 00:16:53.242 "trsvcid": "53696" 00:16:53.242 }, 00:16:53.242 "auth": { 00:16:53.242 "state": "completed", 00:16:53.242 "digest": "sha512", 00:16:53.242 "dhgroup": "ffdhe2048" 00:16:53.242 } 00:16:53.242 } 00:16:53.242 ]' 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.243 09:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.243 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.501 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:53.501 09:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.068 
09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.068 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.327 09:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.327 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.592 00:16:54.592 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.592 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.592 09:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.592 { 00:16:54.592 "cntlid": 111, 
00:16:54.592 "qid": 0, 00:16:54.592 "state": "enabled", 00:16:54.592 "thread": "nvmf_tgt_poll_group_000", 00:16:54.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.592 "listen_address": { 00:16:54.592 "trtype": "TCP", 00:16:54.592 "adrfam": "IPv4", 00:16:54.592 "traddr": "10.0.0.2", 00:16:54.592 "trsvcid": "4420" 00:16:54.592 }, 00:16:54.592 "peer_address": { 00:16:54.592 "trtype": "TCP", 00:16:54.592 "adrfam": "IPv4", 00:16:54.592 "traddr": "10.0.0.1", 00:16:54.592 "trsvcid": "53722" 00:16:54.592 }, 00:16:54.592 "auth": { 00:16:54.592 "state": "completed", 00:16:54.592 "digest": "sha512", 00:16:54.592 "dhgroup": "ffdhe2048" 00:16:54.592 } 00:16:54.592 } 00:16:54.592 ]' 00:16:54.592 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.853 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.112 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:55.112 09:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.679 09:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.679 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.680 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.680 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.680 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.680 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.939 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.939 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.939 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.939 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.939 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.198 { 00:16:56.198 "cntlid": 113, 00:16:56.198 "qid": 0, 00:16:56.198 "state": "enabled", 00:16:56.198 "thread": "nvmf_tgt_poll_group_000", 00:16:56.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.198 "listen_address": { 00:16:56.198 "trtype": "TCP", 00:16:56.198 "adrfam": "IPv4", 00:16:56.198 "traddr": "10.0.0.2", 00:16:56.198 "trsvcid": "4420" 00:16:56.198 }, 00:16:56.198 "peer_address": { 00:16:56.198 "trtype": "TCP", 00:16:56.198 "adrfam": "IPv4", 00:16:56.198 "traddr": "10.0.0.1", 00:16:56.198 "trsvcid": "53750" 00:16:56.198 }, 00:16:56.198 "auth": { 00:16:56.198 "state": 
"completed", 00:16:56.198 "digest": "sha512", 00:16:56.198 "dhgroup": "ffdhe3072" 00:16:56.198 } 00:16:56.198 } 00:16:56.198 ]' 00:16:56.198 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.458 09:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.718 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:56.718 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret 
DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.287 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.546 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.546 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.546 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.546 09:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.546 00:16:57.805 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.805 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.805 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.805 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.806 { 00:16:57.806 "cntlid": 115, 00:16:57.806 "qid": 0, 00:16:57.806 "state": "enabled", 00:16:57.806 "thread": "nvmf_tgt_poll_group_000", 00:16:57.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.806 "listen_address": { 00:16:57.806 "trtype": "TCP", 00:16:57.806 "adrfam": "IPv4", 00:16:57.806 "traddr": "10.0.0.2", 00:16:57.806 "trsvcid": "4420" 00:16:57.806 }, 00:16:57.806 "peer_address": { 00:16:57.806 "trtype": "TCP", 00:16:57.806 "adrfam": "IPv4", 00:16:57.806 "traddr": "10.0.0.1", 00:16:57.806 "trsvcid": "51246" 00:16:57.806 }, 00:16:57.806 "auth": { 00:16:57.806 "state": "completed", 00:16:57.806 "digest": "sha512", 00:16:57.806 "dhgroup": "ffdhe3072" 00:16:57.806 } 00:16:57.806 } 00:16:57.806 ]' 00:16:57.806 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.065 09:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.065 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.324 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:58.324 09:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.892 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.151 00:16:59.151 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.151 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.151 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.410 09:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.410 { 00:16:59.410 "cntlid": 117, 00:16:59.410 "qid": 0, 00:16:59.410 "state": "enabled", 00:16:59.410 "thread": "nvmf_tgt_poll_group_000", 00:16:59.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.410 "listen_address": { 00:16:59.410 "trtype": "TCP", 00:16:59.410 "adrfam": "IPv4", 00:16:59.410 "traddr": "10.0.0.2", 00:16:59.410 "trsvcid": "4420" 00:16:59.410 }, 00:16:59.410 "peer_address": { 00:16:59.410 "trtype": "TCP", 00:16:59.410 "adrfam": "IPv4", 00:16:59.410 "traddr": "10.0.0.1", 00:16:59.410 "trsvcid": "51270" 00:16:59.410 }, 00:16:59.410 "auth": { 00:16:59.410 "state": "completed", 00:16:59.410 "digest": "sha512", 00:16:59.410 "dhgroup": "ffdhe3072" 00:16:59.410 } 00:16:59.410 } 00:16:59.410 ]' 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.410 09:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.669 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.669 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.669 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.669 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.669 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.928 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:16:59.928 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.496 09:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.496 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.755 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.755 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.755 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.755 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.755 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.016 { 00:17:01.016 "cntlid": 119, 00:17:01.016 "qid": 0, 00:17:01.016 "state": "enabled", 00:17:01.016 "thread": "nvmf_tgt_poll_group_000", 00:17:01.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.016 "listen_address": { 00:17:01.016 "trtype": "TCP", 00:17:01.016 "adrfam": "IPv4", 00:17:01.016 "traddr": "10.0.0.2", 00:17:01.016 "trsvcid": "4420" 00:17:01.016 }, 00:17:01.016 "peer_address": { 00:17:01.016 "trtype": "TCP", 00:17:01.016 "adrfam": "IPv4", 00:17:01.016 "traddr": "10.0.0.1", 
00:17:01.016 "trsvcid": "51284" 00:17:01.016 }, 00:17:01.016 "auth": { 00:17:01.016 "state": "completed", 00:17:01.016 "digest": "sha512", 00:17:01.016 "dhgroup": "ffdhe3072" 00:17:01.016 } 00:17:01.016 } 00:17:01.016 ]' 00:17:01.016 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.317 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.644 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:01.644 09:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.908 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.167 09:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.167 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.426 00:17:02.426 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.426 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.426 09:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.685 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.685 { 00:17:02.685 "cntlid": 121, 00:17:02.685 "qid": 0, 00:17:02.686 "state": "enabled", 00:17:02.686 "thread": "nvmf_tgt_poll_group_000", 00:17:02.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.686 "listen_address": { 00:17:02.686 "trtype": "TCP", 00:17:02.686 "adrfam": "IPv4", 00:17:02.686 "traddr": "10.0.0.2", 00:17:02.686 "trsvcid": "4420" 00:17:02.686 }, 00:17:02.686 "peer_address": { 00:17:02.686 "trtype": "TCP", 00:17:02.686 "adrfam": "IPv4", 00:17:02.686 "traddr": "10.0.0.1", 00:17:02.686 "trsvcid": "51298" 00:17:02.686 }, 00:17:02.686 "auth": { 00:17:02.686 "state": "completed", 00:17:02.686 "digest": "sha512", 00:17:02.686 "dhgroup": "ffdhe4096" 00:17:02.686 } 00:17:02.686 } 00:17:02.686 ]' 00:17:02.686 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.686 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.686 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.686 09:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.686 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.945 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.945 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.945 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.945 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:02.945 09:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.514 09:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.514 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.774 09:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.774 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.033 00:17:04.033 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.033 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.033 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.292 { 00:17:04.292 "cntlid": 123, 00:17:04.292 "qid": 0, 00:17:04.292 "state": "enabled", 00:17:04.292 "thread": "nvmf_tgt_poll_group_000", 00:17:04.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.292 "listen_address": { 00:17:04.292 "trtype": "TCP", 00:17:04.292 "adrfam": "IPv4", 00:17:04.292 "traddr": "10.0.0.2", 00:17:04.292 "trsvcid": "4420" 00:17:04.292 }, 00:17:04.292 "peer_address": { 00:17:04.292 "trtype": "TCP", 00:17:04.292 "adrfam": "IPv4", 00:17:04.292 "traddr": "10.0.0.1", 00:17:04.292 "trsvcid": "51316" 00:17:04.292 }, 00:17:04.292 "auth": { 00:17:04.292 "state": "completed", 00:17:04.292 "digest": "sha512", 00:17:04.292 "dhgroup": "ffdhe4096" 00:17:04.292 } 00:17:04.292 } 00:17:04.292 ]' 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.292 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.551 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.551 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.551 09:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.551 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:04.551 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:05.117 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.118 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.118 09:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.376 09:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.635 00:17:05.635 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.635 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.635 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.892 { 00:17:05.892 "cntlid": 125, 00:17:05.892 "qid": 0, 00:17:05.892 "state": "enabled", 00:17:05.892 "thread": "nvmf_tgt_poll_group_000", 00:17:05.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.892 "listen_address": { 00:17:05.892 "trtype": "TCP", 00:17:05.892 "adrfam": "IPv4", 00:17:05.892 "traddr": "10.0.0.2", 00:17:05.892 
"trsvcid": "4420" 00:17:05.892 }, 00:17:05.892 "peer_address": { 00:17:05.892 "trtype": "TCP", 00:17:05.892 "adrfam": "IPv4", 00:17:05.892 "traddr": "10.0.0.1", 00:17:05.892 "trsvcid": "51344" 00:17:05.892 }, 00:17:05.892 "auth": { 00:17:05.892 "state": "completed", 00:17:05.892 "digest": "sha512", 00:17:05.892 "dhgroup": "ffdhe4096" 00:17:05.892 } 00:17:05.892 } 00:17:05.892 ]' 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.892 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.150 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:06.150 09:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.718 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.977 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.237 00:17:07.237 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.237 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.237 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.496 { 00:17:07.496 "cntlid": 127, 00:17:07.496 "qid": 0, 00:17:07.496 "state": "enabled", 00:17:07.496 "thread": "nvmf_tgt_poll_group_000", 00:17:07.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.496 "listen_address": { 00:17:07.496 "trtype": "TCP", 00:17:07.496 "adrfam": "IPv4", 00:17:07.496 "traddr": "10.0.0.2", 00:17:07.496 "trsvcid": "4420" 00:17:07.496 }, 00:17:07.496 "peer_address": { 00:17:07.496 "trtype": "TCP", 00:17:07.496 "adrfam": "IPv4", 00:17:07.496 "traddr": "10.0.0.1", 00:17:07.496 "trsvcid": "42878" 00:17:07.496 }, 00:17:07.496 "auth": { 00:17:07.496 "state": "completed", 00:17:07.496 "digest": "sha512", 00:17:07.496 "dhgroup": "ffdhe4096" 00:17:07.496 } 00:17:07.496 } 00:17:07.496 ]' 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.496 09:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.496 09:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.496 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.496 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.496 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.496 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.755 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:07.755 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.323 09:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.582 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.841 00:17:08.841 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.841 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.841 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.099 09:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.099 { 00:17:09.099 "cntlid": 129, 00:17:09.099 "qid": 0, 00:17:09.099 "state": "enabled", 00:17:09.099 "thread": "nvmf_tgt_poll_group_000", 00:17:09.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.099 "listen_address": { 00:17:09.099 "trtype": "TCP", 00:17:09.099 "adrfam": "IPv4", 00:17:09.099 "traddr": "10.0.0.2", 00:17:09.099 "trsvcid": "4420" 00:17:09.099 }, 00:17:09.099 "peer_address": { 00:17:09.099 "trtype": "TCP", 00:17:09.099 "adrfam": "IPv4", 00:17:09.099 "traddr": "10.0.0.1", 00:17:09.099 "trsvcid": "42898" 00:17:09.099 }, 00:17:09.099 "auth": { 00:17:09.099 "state": "completed", 00:17:09.099 "digest": "sha512", 00:17:09.099 "dhgroup": "ffdhe6144" 00:17:09.099 } 00:17:09.099 } 00:17:09.099 ]' 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.099 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.358 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.358 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.358 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.358 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:09.358 09:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.924 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.924 09:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.182 09:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.748 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.748 { 00:17:10.748 "cntlid": 131, 00:17:10.748 "qid": 0, 00:17:10.748 "state": "enabled", 00:17:10.748 "thread": "nvmf_tgt_poll_group_000", 00:17:10.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.748 "listen_address": { 00:17:10.748 "trtype": "TCP", 00:17:10.748 "adrfam": "IPv4", 00:17:10.748 "traddr": "10.0.0.2", 00:17:10.748 
"trsvcid": "4420" 00:17:10.748 }, 00:17:10.748 "peer_address": { 00:17:10.748 "trtype": "TCP", 00:17:10.748 "adrfam": "IPv4", 00:17:10.748 "traddr": "10.0.0.1", 00:17:10.748 "trsvcid": "42930" 00:17:10.748 }, 00:17:10.748 "auth": { 00:17:10.748 "state": "completed", 00:17:10.748 "digest": "sha512", 00:17:10.748 "dhgroup": "ffdhe6144" 00:17:10.748 } 00:17:10.748 } 00:17:10.748 ]' 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.748 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:11.007 09:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.573 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.831 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.399 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.399 { 00:17:12.399 "cntlid": 133, 00:17:12.399 "qid": 0, 00:17:12.399 "state": "enabled", 00:17:12.399 "thread": "nvmf_tgt_poll_group_000", 00:17:12.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.399 "listen_address": { 00:17:12.399 "trtype": "TCP", 00:17:12.399 "adrfam": "IPv4", 00:17:12.399 "traddr": "10.0.0.2", 00:17:12.399 "trsvcid": "4420" 00:17:12.399 }, 00:17:12.399 "peer_address": { 00:17:12.399 "trtype": "TCP", 00:17:12.399 "adrfam": "IPv4", 00:17:12.399 "traddr": "10.0.0.1", 00:17:12.399 "trsvcid": "42956" 00:17:12.399 }, 00:17:12.399 "auth": { 00:17:12.399 "state": "completed", 00:17:12.399 "digest": "sha512", 00:17:12.399 "dhgroup": "ffdhe6144" 00:17:12.399 } 00:17:12.399 } 00:17:12.399 ]' 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.399 09:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.399 09:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.658 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.658 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.658 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.658 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:12.658 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.226 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.485 09:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.052 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.052 { 00:17:14.052 "cntlid": 135, 00:17:14.052 "qid": 0, 00:17:14.052 "state": "enabled", 00:17:14.052 "thread": "nvmf_tgt_poll_group_000", 00:17:14.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.052 "listen_address": { 00:17:14.052 "trtype": "TCP", 00:17:14.052 "adrfam": "IPv4", 00:17:14.052 "traddr": "10.0.0.2", 00:17:14.052 "trsvcid": "4420" 00:17:14.052 }, 00:17:14.052 "peer_address": { 00:17:14.052 "trtype": "TCP", 00:17:14.052 "adrfam": "IPv4", 00:17:14.052 "traddr": "10.0.0.1", 00:17:14.052 "trsvcid": "42988" 00:17:14.052 }, 00:17:14.052 "auth": { 00:17:14.052 "state": "completed", 00:17:14.052 "digest": "sha512", 00:17:14.052 "dhgroup": "ffdhe6144" 00:17:14.052 } 00:17:14.052 } 00:17:14.052 ]' 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.052 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:14.311 09:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.878 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.878 09:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.138 09:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.705 00:17:15.705 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.705 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.705 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.964 { 00:17:15.964 "cntlid": 137, 00:17:15.964 "qid": 0, 00:17:15.964 "state": "enabled", 00:17:15.964 "thread": "nvmf_tgt_poll_group_000", 00:17:15.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.964 "listen_address": { 00:17:15.964 "trtype": "TCP", 00:17:15.964 "adrfam": "IPv4", 00:17:15.964 "traddr": "10.0.0.2", 00:17:15.964 
"trsvcid": "4420" 00:17:15.964 }, 00:17:15.964 "peer_address": { 00:17:15.964 "trtype": "TCP", 00:17:15.964 "adrfam": "IPv4", 00:17:15.964 "traddr": "10.0.0.1", 00:17:15.964 "trsvcid": "43022" 00:17:15.964 }, 00:17:15.964 "auth": { 00:17:15.964 "state": "completed", 00:17:15.964 "digest": "sha512", 00:17:15.964 "dhgroup": "ffdhe8192" 00:17:15.964 } 00:17:15.964 } 00:17:15.964 ]' 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.964 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.223 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:16.223 09:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.790 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.048 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:17.048 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.048 09:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.048 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.049 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.616 00:17:17.616 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.616 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.616 09:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.616 { 00:17:17.616 "cntlid": 139, 00:17:17.616 "qid": 0, 00:17:17.616 "state": "enabled", 00:17:17.616 "thread": "nvmf_tgt_poll_group_000", 00:17:17.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.616 "listen_address": { 00:17:17.616 "trtype": "TCP", 00:17:17.616 "adrfam": "IPv4", 00:17:17.616 "traddr": "10.0.0.2", 00:17:17.616 "trsvcid": "4420" 00:17:17.616 }, 00:17:17.616 "peer_address": { 00:17:17.616 "trtype": "TCP", 00:17:17.616 "adrfam": "IPv4", 00:17:17.616 "traddr": "10.0.0.1", 00:17:17.616 "trsvcid": "56846" 00:17:17.616 }, 00:17:17.616 "auth": { 00:17:17.616 "state": "completed", 00:17:17.616 "digest": "sha512", 00:17:17.616 "dhgroup": "ffdhe8192" 00:17:17.616 } 00:17:17.616 } 00:17:17.616 ]' 00:17:17.616 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.875 09:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.875 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.134 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:18.134 09:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: --dhchap-ctrl-secret DHHC-1:02:YjE2ZGM1MGE5MjdiMjYyN2E2NGI5Yzc1MjcxNWI3MjE2YTViOTRkODNjYTFmYzAyNsnV+g==: 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.702 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.961 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.961 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.961 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.961 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.219 00:17:19.219 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.219 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.219 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.478 09:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.478 { 00:17:19.478 "cntlid": 141, 00:17:19.478 "qid": 0, 00:17:19.478 "state": "enabled", 00:17:19.478 "thread": "nvmf_tgt_poll_group_000", 00:17:19.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.478 "listen_address": { 00:17:19.478 "trtype": "TCP", 00:17:19.478 "adrfam": "IPv4", 00:17:19.478 "traddr": "10.0.0.2", 00:17:19.478 "trsvcid": "4420" 00:17:19.478 }, 00:17:19.478 "peer_address": { 00:17:19.478 "trtype": "TCP", 00:17:19.478 "adrfam": "IPv4", 00:17:19.478 "traddr": "10.0.0.1", 00:17:19.478 "trsvcid": "56870" 00:17:19.478 }, 00:17:19.478 "auth": { 00:17:19.478 "state": "completed", 00:17:19.478 "digest": "sha512", 00:17:19.478 "dhgroup": "ffdhe8192" 00:17:19.478 } 00:17:19.478 } 00:17:19.478 ]' 00:17:19.478 09:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.478 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.478 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:19.737 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:01:ZjRkOTI1MjIzZGQ2ZjY0MjczNTMwODE4NTk0ODliYmZbeHO0: 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.304 09:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.563 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.132 00:17:21.132 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.132 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.132 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.390 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.390 { 00:17:21.390 "cntlid": 143, 00:17:21.390 "qid": 0, 00:17:21.390 "state": "enabled", 00:17:21.390 "thread": "nvmf_tgt_poll_group_000", 00:17:21.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.390 "listen_address": { 00:17:21.390 "trtype": "TCP", 00:17:21.391 "adrfam": 
"IPv4", 00:17:21.391 "traddr": "10.0.0.2", 00:17:21.391 "trsvcid": "4420" 00:17:21.391 }, 00:17:21.391 "peer_address": { 00:17:21.391 "trtype": "TCP", 00:17:21.391 "adrfam": "IPv4", 00:17:21.391 "traddr": "10.0.0.1", 00:17:21.391 "trsvcid": "56904" 00:17:21.391 }, 00:17:21.391 "auth": { 00:17:21.391 "state": "completed", 00:17:21.391 "digest": "sha512", 00:17:21.391 "dhgroup": "ffdhe8192" 00:17:21.391 } 00:17:21.391 } 00:17:21.391 ]' 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.391 09:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.649 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:21.649 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.216 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.475 09:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.475 09:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.042 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.043 { 00:17:23.043 "cntlid": 145, 00:17:23.043 "qid": 0, 00:17:23.043 "state": "enabled", 00:17:23.043 "thread": "nvmf_tgt_poll_group_000", 00:17:23.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.043 "listen_address": { 00:17:23.043 "trtype": "TCP", 00:17:23.043 "adrfam": "IPv4", 00:17:23.043 "traddr": "10.0.0.2", 00:17:23.043 "trsvcid": "4420" 00:17:23.043 }, 00:17:23.043 "peer_address": { 00:17:23.043 "trtype": "TCP", 00:17:23.043 "adrfam": "IPv4", 00:17:23.043 "traddr": "10.0.0.1", 00:17:23.043 "trsvcid": "56934" 00:17:23.043 }, 00:17:23.043 "auth": { 00:17:23.043 "state": 
"completed", 00:17:23.043 "digest": "sha512", 00:17:23.043 "dhgroup": "ffdhe8192" 00:17:23.043 } 00:17:23.043 } 00:17:23.043 ]' 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.043 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.301 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:23.302 09:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzQwYTY1OTIzYzU3YzMwN2FlNzVjZDI1MTg4YTBjNTM4MzY2N2QxNzRkMjA5ODlkDYXRLg==: --dhchap-ctrl-secret 
DHHC-1:03:YjNhNzJmNzY2ZDc0MDAwMDkzNDhiZGYyMDFhYjA2MzA1N2Y4MWZmMWMyMjYwOGE5YTA4ODUwNDY5OTU4MjFmMsXMvJU=: 00:17:23.869 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.869 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.869 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.130 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:24.131 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:24.390 request: 00:17:24.390 { 00:17:24.390 "name": "nvme0", 00:17:24.390 "trtype": "tcp", 00:17:24.390 "traddr": "10.0.0.2", 00:17:24.390 "adrfam": "ipv4", 00:17:24.390 "trsvcid": "4420", 00:17:24.390 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.390 "prchk_reftag": false, 00:17:24.390 "prchk_guard": false, 00:17:24.391 "hdgst": false, 00:17:24.391 "ddgst": false, 00:17:24.391 "dhchap_key": "key2", 00:17:24.391 "allow_unrecognized_csi": false, 00:17:24.391 "method": "bdev_nvme_attach_controller", 00:17:24.391 "req_id": 1 00:17:24.391 } 00:17:24.391 Got JSON-RPC error response 00:17:24.391 response: 00:17:24.391 { 00:17:24.391 "code": -5, 00:17:24.391 "message": 
"Input/output error" 00:17:24.391 } 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:24.391 09:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.391 09:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.959 request: 00:17:24.959 { 00:17:24.959 "name": "nvme0", 00:17:24.959 "trtype": "tcp", 00:17:24.959 "traddr": "10.0.0.2", 00:17:24.959 "adrfam": "ipv4", 00:17:24.959 "trsvcid": "4420", 00:17:24.959 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.959 "prchk_reftag": false, 00:17:24.959 "prchk_guard": false, 00:17:24.959 "hdgst": 
false, 00:17:24.959 "ddgst": false, 00:17:24.959 "dhchap_key": "key1", 00:17:24.959 "dhchap_ctrlr_key": "ckey2", 00:17:24.959 "allow_unrecognized_csi": false, 00:17:24.959 "method": "bdev_nvme_attach_controller", 00:17:24.959 "req_id": 1 00:17:24.959 } 00:17:24.959 Got JSON-RPC error response 00:17:24.959 response: 00:17:24.959 { 00:17:24.959 "code": -5, 00:17:24.959 "message": "Input/output error" 00:17:24.959 } 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.959 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.960 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.529 request: 00:17:25.529 { 00:17:25.529 "name": "nvme0", 00:17:25.529 "trtype": 
"tcp", 00:17:25.529 "traddr": "10.0.0.2", 00:17:25.529 "adrfam": "ipv4", 00:17:25.529 "trsvcid": "4420", 00:17:25.529 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:25.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:25.529 "prchk_reftag": false, 00:17:25.529 "prchk_guard": false, 00:17:25.529 "hdgst": false, 00:17:25.529 "ddgst": false, 00:17:25.529 "dhchap_key": "key1", 00:17:25.529 "dhchap_ctrlr_key": "ckey1", 00:17:25.529 "allow_unrecognized_csi": false, 00:17:25.529 "method": "bdev_nvme_attach_controller", 00:17:25.529 "req_id": 1 00:17:25.529 } 00:17:25.529 Got JSON-RPC error response 00:17:25.529 response: 00:17:25.529 { 00:17:25.529 "code": -5, 00:17:25.529 "message": "Input/output error" 00:17:25.529 } 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2634567 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2634567 ']' 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2634567 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634567 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634567' 00:17:25.529 killing process with pid 2634567 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2634567 00:17:25.529 09:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2634567 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2656564 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2656564 00:17:25.529 09:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2656564 ']' 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.529 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.788 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.788 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:25.788 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.788 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.788 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2656564 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2656564 ']' 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.789 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.048 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.048 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:26.048 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:26.048 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.048 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.048 null0 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6Qz 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vRr ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vRr 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kYv 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.vgI ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vgI 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.d9Z 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.N1u ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.N1u 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xhE 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.308 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.309 09:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.874 nvme0n1 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.134 { 00:17:27.134 "cntlid": 1, 00:17:27.134 "qid": 0, 00:17:27.134 "state": "enabled", 00:17:27.134 "thread": "nvmf_tgt_poll_group_000", 00:17:27.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:27.134 "listen_address": { 00:17:27.134 "trtype": "TCP", 00:17:27.134 "adrfam": "IPv4", 00:17:27.134 "traddr": "10.0.0.2", 00:17:27.134 "trsvcid": "4420" 00:17:27.134 }, 00:17:27.134 "peer_address": { 00:17:27.134 "trtype": "TCP", 00:17:27.134 "adrfam": "IPv4", 00:17:27.134 "traddr": 
"10.0.0.1", 00:17:27.134 "trsvcid": "59720" 00:17:27.134 }, 00:17:27.134 "auth": { 00:17:27.134 "state": "completed", 00:17:27.134 "digest": "sha512", 00:17:27.134 "dhgroup": "ffdhe8192" 00:17:27.134 } 00:17:27.134 } 00:17:27.134 ]' 00:17:27.134 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.393 09:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.652 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:27.652 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:28.219 09:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.219 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:28.219 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:28.220 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:28.478 09:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.478 09:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.478 request: 00:17:28.478 { 00:17:28.478 "name": "nvme0", 00:17:28.478 "trtype": "tcp", 00:17:28.478 "traddr": "10.0.0.2", 00:17:28.478 "adrfam": "ipv4", 00:17:28.478 "trsvcid": "4420", 00:17:28.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.478 "prchk_reftag": false, 00:17:28.478 "prchk_guard": false, 00:17:28.478 "hdgst": false, 00:17:28.478 "ddgst": false, 00:17:28.478 "dhchap_key": "key3", 00:17:28.478 
"allow_unrecognized_csi": false, 00:17:28.478 "method": "bdev_nvme_attach_controller", 00:17:28.478 "req_id": 1 00:17:28.478 } 00:17:28.478 Got JSON-RPC error response 00:17:28.478 response: 00:17:28.478 { 00:17:28.478 "code": -5, 00:17:28.478 "message": "Input/output error" 00:17:28.478 } 00:17:28.478 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:28.478 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.478 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.478 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:28.738 09:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.738 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.997 request: 00:17:28.997 { 00:17:28.997 "name": "nvme0", 00:17:28.997 "trtype": "tcp", 00:17:28.997 "traddr": "10.0.0.2", 00:17:28.997 "adrfam": "ipv4", 00:17:28.997 "trsvcid": "4420", 00:17:28.997 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.997 "prchk_reftag": false, 00:17:28.997 "prchk_guard": false, 00:17:28.997 "hdgst": false, 00:17:28.997 "ddgst": false, 00:17:28.997 "dhchap_key": "key3", 00:17:28.997 "allow_unrecognized_csi": false, 00:17:28.997 "method": "bdev_nvme_attach_controller", 00:17:28.997 "req_id": 1 00:17:28.997 } 00:17:28.997 Got JSON-RPC error response 00:17:28.997 response: 00:17:28.997 { 00:17:28.997 "code": -5, 00:17:28.997 "message": "Input/output error" 00:17:28.997 } 00:17:28.997 
09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.997 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.257 09:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:29.516 request: 00:17:29.516 { 00:17:29.516 "name": "nvme0", 00:17:29.516 "trtype": "tcp", 00:17:29.516 "traddr": "10.0.0.2", 00:17:29.516 "adrfam": "ipv4", 00:17:29.516 "trsvcid": "4420", 00:17:29.516 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.516 "prchk_reftag": false, 00:17:29.516 "prchk_guard": false, 00:17:29.516 "hdgst": false, 00:17:29.516 "ddgst": false, 00:17:29.516 "dhchap_key": "key0", 00:17:29.516 "dhchap_ctrlr_key": "key1", 00:17:29.516 "allow_unrecognized_csi": false, 00:17:29.516 "method": "bdev_nvme_attach_controller", 00:17:29.516 "req_id": 1 00:17:29.516 } 00:17:29.516 Got JSON-RPC error response 00:17:29.516 response: 00:17:29.516 { 00:17:29.516 "code": -5, 00:17:29.516 "message": "Input/output error" 00:17:29.516 } 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:29.516 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:29.803 nvme0n1 00:17:29.803 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:29.803 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:29.803 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.062 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.062 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.062 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:30.321 09:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:30.889 nvme0n1 00:17:30.889 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:30.889 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:30.889 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.148 
09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:31.148 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.407 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.407 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:31.407 09:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: --dhchap-ctrl-secret DHHC-1:03:ZmJiMjA2MmZhY2Q5M2NhYzQ2NDViMjU3OWI2YWZlOTg2NWQzNzFmYWUxMTkxNjE2NDMzZWM5MWRmY2RmODRjMtCOy9g=: 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.974 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:32.241 09:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:32.500 request: 00:17:32.501 { 00:17:32.501 "name": "nvme0", 00:17:32.501 "trtype": "tcp", 00:17:32.501 "traddr": "10.0.0.2", 00:17:32.501 "adrfam": "ipv4", 00:17:32.501 "trsvcid": "4420", 00:17:32.501 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.501 "prchk_reftag": false, 00:17:32.501 "prchk_guard": false, 00:17:32.501 "hdgst": false, 00:17:32.501 "ddgst": false, 00:17:32.501 "dhchap_key": "key1", 00:17:32.501 "allow_unrecognized_csi": false, 00:17:32.501 "method": "bdev_nvme_attach_controller", 00:17:32.501 "req_id": 1 00:17:32.501 } 00:17:32.501 Got JSON-RPC error response 00:17:32.501 response: 00:17:32.501 { 00:17:32.501 "code": -5, 00:17:32.501 "message": "Input/output error" 00:17:32.501 } 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:32.501 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:33.437 nvme0n1 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.437 09:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:33.695 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:33.954 nvme0n1 00:17:33.954 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:33.954 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:33.954 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.212 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.212 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.212 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: '' 2s 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: ]] 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmYxNmVkM2EyZjk0Y2E1MGEyNjg1NDU5MmM1YTJkMWWxAyCI: 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:34.471 09:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:36.376 
09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: 2s 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:36.376 09:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: ]] 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDY5NmNhN2NiMmVhZTgzNDM0Zjc5N2YzNGVmNTM5ODgyNTcwYzA2NGRhODgyODVkkOlMcg==: 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:36.376 09:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.911 09:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:39.170 nvme0n1 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.170 09:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.738 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:39.738 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:39.738 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:39.997 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:40.256 09:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:40.823 request: 00:17:40.823 { 00:17:40.823 "name": "nvme0", 00:17:40.823 "dhchap_key": "key1", 00:17:40.823 "dhchap_ctrlr_key": "key3", 00:17:40.823 "method": "bdev_nvme_set_keys", 00:17:40.823 "req_id": 1 00:17:40.823 } 00:17:40.823 Got JSON-RPC error response 00:17:40.823 response: 00:17:40.823 { 00:17:40.823 "code": -13, 00:17:40.823 "message": "Permission denied" 00:17:40.823 } 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:40.823 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:40.823 09:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.129 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:41.129 09:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.134 09:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.070 nvme0n1 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.071 09:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.071 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.329 request: 00:17:43.329 { 00:17:43.329 "name": "nvme0", 00:17:43.329 "dhchap_key": "key2", 00:17:43.329 "dhchap_ctrlr_key": "key0", 00:17:43.329 "method": "bdev_nvme_set_keys", 00:17:43.329 "req_id": 1 00:17:43.329 } 00:17:43.329 Got JSON-RPC error response 00:17:43.329 response: 00:17:43.329 { 00:17:43.329 "code": -13, 00:17:43.329 "message": "Permission denied" 00:17:43.329 } 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:43.329 09:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.588 09:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:43.588 09:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:44.525 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:44.525 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:44.525 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2634625 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2634625 ']' 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2634625 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2634625 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2634625' 00:17:44.785 killing process with pid 2634625 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2634625 00:17:44.785 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2634625 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.354 rmmod nvme_tcp 00:17:45.354 rmmod nvme_fabrics 00:17:45.354 rmmod nvme_keyring 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2656564 ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2656564 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2656564 ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2656564 
00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656564 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656564' 00:17:45.354 killing process with pid 2656564 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2656564 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2656564 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.354 09:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.354 09:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.890 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:47.890 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6Qz /tmp/spdk.key-sha256.kYv /tmp/spdk.key-sha384.d9Z /tmp/spdk.key-sha512.xhE /tmp/spdk.key-sha512.vRr /tmp/spdk.key-sha384.vgI /tmp/spdk.key-sha256.N1u '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:47.890 00:17:47.890 real 2m31.270s 00:17:47.890 user 5m48.611s 00:17:47.890 sys 0m24.008s 00:17:47.890 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.890 09:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.890 ************************************ 00:17:47.890 END TEST nvmf_auth_target 00:17:47.890 ************************************ 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.890 ************************************ 00:17:47.890 START TEST nvmf_bdevio_no_huge 00:17:47.890 ************************************ 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.890 * Looking for test storage... 00:17:47.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.890 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:47.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.891 --rc genhtml_branch_coverage=1 00:17:47.891 --rc genhtml_function_coverage=1 00:17:47.891 --rc genhtml_legend=1 00:17:47.891 --rc geninfo_all_blocks=1 00:17:47.891 --rc geninfo_unexecuted_blocks=1 00:17:47.891 00:17:47.891 ' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:47.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.891 --rc genhtml_branch_coverage=1 00:17:47.891 --rc genhtml_function_coverage=1 00:17:47.891 --rc genhtml_legend=1 00:17:47.891 --rc geninfo_all_blocks=1 00:17:47.891 --rc geninfo_unexecuted_blocks=1 00:17:47.891 00:17:47.891 ' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:47.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.891 --rc genhtml_branch_coverage=1 00:17:47.891 --rc genhtml_function_coverage=1 00:17:47.891 --rc genhtml_legend=1 00:17:47.891 --rc geninfo_all_blocks=1 00:17:47.891 --rc geninfo_unexecuted_blocks=1 00:17:47.891 00:17:47.891 ' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:47.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.891 --rc genhtml_branch_coverage=1 
00:17:47.891 --rc genhtml_function_coverage=1 00:17:47.891 --rc genhtml_legend=1 00:17:47.891 --rc geninfo_all_blocks=1 00:17:47.891 --rc geninfo_unexecuted_blocks=1 00:17:47.891 00:17:47.891 ' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.891 09:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.891 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:47.892 09:56:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:54.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:54.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:54.463 Found net devices under 0000:86:00.0: cvl_0_0 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.463 
09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:54.463 Found net devices under 0000:86:00.1: cvl_0_1 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.463 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.464 09:56:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:54.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:17:54.464 00:17:54.464 --- 10.0.0.2 ping statistics --- 00:17:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.464 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:54.464 00:17:54.464 --- 10.0.0.1 ping statistics --- 00:17:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.464 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2663451 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2663451 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2663451 ']' 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.464 09:56:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.464 [2024-11-20 09:56:27.268433] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:17:54.464 [2024-11-20 09:56:27.268479] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:54.464 [2024-11-20 09:56:27.352491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.464 [2024-11-20 09:56:27.398620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.464 [2024-11-20 09:56:27.398654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.464 [2024-11-20 09:56:27.398661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.464 [2024-11-20 09:56:27.398667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.464 [2024-11-20 09:56:27.398672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.464 [2024-11-20 09:56:27.399843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:54.464 [2024-11-20 09:56:27.399950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:54.464 [2024-11-20 09:56:27.400057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.464 [2024-11-20 09:56:27.400058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 [2024-11-20 09:56:28.148582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.722 09:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 Malloc0 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 [2024-11-20 09:56:28.192864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.722 09:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:54.722 { 00:17:54.722 "params": { 00:17:54.722 "name": "Nvme$subsystem", 00:17:54.722 "trtype": "$TEST_TRANSPORT", 00:17:54.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.722 "adrfam": "ipv4", 00:17:54.722 "trsvcid": "$NVMF_PORT", 00:17:54.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.722 "hdgst": ${hdgst:-false}, 00:17:54.722 "ddgst": ${ddgst:-false} 00:17:54.722 }, 00:17:54.722 "method": "bdev_nvme_attach_controller" 00:17:54.722 } 00:17:54.722 EOF 00:17:54.722 )") 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:54.722 09:56:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:54.722 "params": { 00:17:54.722 "name": "Nvme1", 00:17:54.722 "trtype": "tcp", 00:17:54.722 "traddr": "10.0.0.2", 00:17:54.722 "adrfam": "ipv4", 00:17:54.722 "trsvcid": "4420", 00:17:54.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.722 "hdgst": false, 00:17:54.722 "ddgst": false 00:17:54.722 }, 00:17:54.722 "method": "bdev_nvme_attach_controller" 00:17:54.722 }' 00:17:54.722 [2024-11-20 09:56:28.242726] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:17:54.723 [2024-11-20 09:56:28.242769] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2663540 ] 00:17:54.979 [2024-11-20 09:56:28.321730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.979 [2024-11-20 09:56:28.369932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.979 [2024-11-20 09:56:28.370042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.979 [2024-11-20 09:56:28.370042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.235 I/O targets: 00:17:55.235 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:55.235 00:17:55.235 00:17:55.235 CUnit - A unit testing framework for C - Version 2.1-3 00:17:55.235 http://cunit.sourceforge.net/ 00:17:55.235 00:17:55.235 00:17:55.236 Suite: bdevio tests on: Nvme1n1 00:17:55.236 Test: blockdev write read block ...passed 00:17:55.236 Test: blockdev write zeroes read block ...passed 00:17:55.236 Test: blockdev write zeroes read no split ...passed 00:17:55.236 Test: blockdev write zeroes 
read split ...passed 00:17:55.492 Test: blockdev write zeroes read split partial ...passed 00:17:55.492 Test: blockdev reset ...[2024-11-20 09:56:28.855995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:55.492 [2024-11-20 09:56:28.856059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb49920 (9): Bad file descriptor 00:17:55.492 [2024-11-20 09:56:28.871301] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:55.492 passed 00:17:55.492 Test: blockdev write read 8 blocks ...passed 00:17:55.492 Test: blockdev write read size > 128k ...passed 00:17:55.492 Test: blockdev write read invalid size ...passed 00:17:55.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:55.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:55.492 Test: blockdev write read max offset ...passed 00:17:55.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:55.772 Test: blockdev writev readv 8 blocks ...passed 00:17:55.772 Test: blockdev writev readv 30 x 1block ...passed 00:17:55.772 Test: blockdev writev readv block ...passed 00:17:55.772 Test: blockdev writev readv size > 128k ...passed 00:17:55.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:55.772 Test: blockdev comparev and writev ...[2024-11-20 09:56:29.122008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 
09:56:29.122057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.772 [2024-11-20 09:56:29.122822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.772 [2024-11-20 09:56:29.122830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.772 passed 00:17:55.772 Test: blockdev nvme passthru rw ...passed 00:17:55.773 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:56:29.204591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.773 [2024-11-20 09:56:29.204607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.773 [2024-11-20 09:56:29.204718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.773 [2024-11-20 09:56:29.204728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.773 [2024-11-20 09:56:29.204827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.773 [2024-11-20 09:56:29.204836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.773 [2024-11-20 09:56:29.204937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.773 [2024-11-20 09:56:29.204946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.773 passed 00:17:55.773 Test: blockdev nvme admin passthru ...passed 00:17:55.773 Test: blockdev copy ...passed 00:17:55.773 00:17:55.773 Run Summary: Type Total Ran Passed Failed Inactive 00:17:55.773 suites 1 1 n/a 0 0 00:17:55.773 tests 23 23 23 0 0 00:17:55.773 asserts 152 152 152 0 n/a 00:17:55.773 00:17:55.773 Elapsed time = 1.224 seconds 
00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.029 rmmod nvme_tcp 00:17:56.029 rmmod nvme_fabrics 00:17:56.029 rmmod nvme_keyring 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2663451 ']' 00:17:56.029 09:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2663451 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2663451 ']' 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2663451 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.029 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2663451 00:17:56.286 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:56.286 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:56.286 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2663451' 00:17:56.286 killing process with pid 2663451 00:17:56.286 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2663451 00:17:56.286 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2663451 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:56.544 09:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.544 09:56:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.446 09:56:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:58.446 00:17:58.446 real 0m10.956s 00:17:58.446 user 0m14.105s 00:17:58.446 sys 0m5.443s 00:17:58.446 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.446 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.446 ************************************ 00:17:58.446 END TEST nvmf_bdevio_no_huge 00:17:58.446 ************************************ 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.706 
************************************ 00:17:58.706 START TEST nvmf_tls 00:17:58.706 ************************************ 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:58.706 * Looking for test storage... 00:17:58.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.706 --rc genhtml_branch_coverage=1 00:17:58.706 --rc genhtml_function_coverage=1 00:17:58.706 --rc genhtml_legend=1 00:17:58.706 --rc geninfo_all_blocks=1 00:17:58.706 --rc geninfo_unexecuted_blocks=1 00:17:58.706 00:17:58.706 ' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.706 --rc genhtml_branch_coverage=1 00:17:58.706 --rc genhtml_function_coverage=1 00:17:58.706 --rc genhtml_legend=1 00:17:58.706 --rc geninfo_all_blocks=1 00:17:58.706 --rc geninfo_unexecuted_blocks=1 00:17:58.706 00:17:58.706 ' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.706 --rc genhtml_branch_coverage=1 00:17:58.706 --rc genhtml_function_coverage=1 00:17:58.706 --rc genhtml_legend=1 00:17:58.706 --rc geninfo_all_blocks=1 00:17:58.706 --rc geninfo_unexecuted_blocks=1 00:17:58.706 00:17:58.706 ' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.706 --rc genhtml_branch_coverage=1 00:17:58.706 --rc genhtml_function_coverage=1 00:17:58.706 --rc genhtml_legend=1 00:17:58.706 --rc geninfo_all_blocks=1 00:17:58.706 --rc geninfo_unexecuted_blocks=1 00:17:58.706 00:17:58.706 ' 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:58.706 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.707 
09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:58.707 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:58.967 09:56:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:05.537 09:56:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.537 09:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:05.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:05.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.537 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.538 09:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:05.538 Found net devices under 0000:86:00.0: cvl_0_0 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:05.538 Found net devices under 0000:86:00.1: cvl_0_1 00:18:05.538 09:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.538 
09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:18:05.538 00:18:05.538 --- 10.0.0.2 ping statistics --- 00:18:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.538 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:05.538 00:18:05.538 --- 10.0.0.1 ping statistics --- 00:18:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.538 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2667356 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2667356 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2667356 ']' 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.538 09:56:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.538 [2024-11-20 09:56:38.362527] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:18:05.538 [2024-11-20 09:56:38.362571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.538 [2024-11-20 09:56:38.442516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.538 [2024-11-20 09:56:38.482852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.538 [2024-11-20 09:56:38.482886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:05.538 [2024-11-20 09:56:38.482893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.538 [2024-11-20 09:56:38.482899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.538 [2024-11-20 09:56:38.482904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.538 [2024-11-20 09:56:38.483471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.802 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.802 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.802 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.803 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.803 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.803 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.803 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:05.803 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:06.063 true 00:18:06.063 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.063 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:06.063 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:06.063 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:06.063 
09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:06.320 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.320 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:06.579 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:06.579 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:06.579 09:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:06.579 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.579 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:06.837 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:06.837 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:06.837 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.837 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:07.095 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:07.095 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:07.095 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:07.354 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.354 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:07.354 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:07.354 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:07.354 09:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:07.613 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.613 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:07.871 09:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.fGXqrDF2Mb 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.by3auwdbCe 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fGXqrDF2Mb 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.by3auwdbCe 00:18:07.871 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:08.129 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:08.387 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.fGXqrDF2Mb 00:18:08.387 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fGXqrDF2Mb 00:18:08.387 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:08.387 [2024-11-20 09:56:41.937250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.387 09:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:08.645 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:08.902 [2024-11-20 09:56:42.294146] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.902 [2024-11-20 09:56:42.294368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.902 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.902 malloc0 00:18:09.161 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:09.161 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fGXqrDF2Mb 00:18:09.418 09:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:09.675 09:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.fGXqrDF2Mb 00:18:19.644 Initializing NVMe Controllers 00:18:19.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:19.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:19.644 Initialization complete. Launching workers. 
00:18:19.644 ======================================================== 00:18:19.644 Latency(us) 00:18:19.644 Device Information : IOPS MiB/s Average min max 00:18:19.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16853.84 65.84 3797.41 889.92 5867.51 00:18:19.644 ======================================================== 00:18:19.645 Total : 16853.84 65.84 3797.41 889.92 5867.51 00:18:19.645 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fGXqrDF2Mb 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fGXqrDF2Mb 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2669812 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2669812 /var/tmp/bdevperf.sock 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2669812 ']' 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.645 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.645 [2024-11-20 09:56:53.222400] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:18:19.645 [2024-11-20 09:56:53.222451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669812 ] 00:18:19.903 [2024-11-20 09:56:53.294320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.903 [2024-11-20 09:56:53.336219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.903 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.903 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.903 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fGXqrDF2Mb 00:18:20.161 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:20.420 [2024-11-20 09:56:53.786349] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.420 TLSTESTn1 00:18:20.420 09:56:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.420 Running I/O for 10 seconds... 00:18:22.730 5722.00 IOPS, 22.35 MiB/s [2024-11-20T08:56:57.247Z] 5736.00 IOPS, 22.41 MiB/s [2024-11-20T08:56:58.181Z] 5730.67 IOPS, 22.39 MiB/s [2024-11-20T08:56:59.112Z] 5759.25 IOPS, 22.50 MiB/s [2024-11-20T08:57:00.047Z] 5759.20 IOPS, 22.50 MiB/s [2024-11-20T08:57:00.985Z] 5785.50 IOPS, 22.60 MiB/s [2024-11-20T08:57:02.360Z] 5774.57 IOPS, 22.56 MiB/s [2024-11-20T08:57:03.293Z] 5770.50 IOPS, 22.54 MiB/s [2024-11-20T08:57:04.227Z] 5776.67 IOPS, 22.57 MiB/s [2024-11-20T08:57:04.227Z] 5789.30 IOPS, 22.61 MiB/s 00:18:30.645 Latency(us) 00:18:30.645 [2024-11-20T08:57:04.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.645 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.645 Verification LBA range: start 0x0 length 0x2000 00:18:30.645 TLSTESTn1 : 10.01 5794.49 22.63 0.00 0.00 22057.29 4930.80 20971.52 00:18:30.645 [2024-11-20T08:57:04.227Z] =================================================================================================================== 00:18:30.645 [2024-11-20T08:57:04.227Z] Total : 5794.49 22.63 0.00 0.00 22057.29 4930.80 20971.52 00:18:30.645 { 00:18:30.645 "results": [ 00:18:30.645 { 00:18:30.645 "job": "TLSTESTn1", 00:18:30.645 "core_mask": "0x4", 00:18:30.645 "workload": "verify", 00:18:30.645 "status": "finished", 00:18:30.645 "verify_range": { 00:18:30.645 "start": 0, 00:18:30.645 "length": 8192 00:18:30.645 }, 00:18:30.645 "queue_depth": 128, 00:18:30.645 "io_size": 4096, 00:18:30.645 "runtime": 10.012443, 00:18:30.645 "iops": 
5794.489916197275, 00:18:30.645 "mibps": 22.634726235145607, 00:18:30.645 "io_failed": 0, 00:18:30.645 "io_timeout": 0, 00:18:30.645 "avg_latency_us": 22057.287970603036, 00:18:30.645 "min_latency_us": 4930.80380952381, 00:18:30.645 "max_latency_us": 20971.52 00:18:30.645 } 00:18:30.645 ], 00:18:30.645 "core_count": 1 00:18:30.645 } 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2669812 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2669812 ']' 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2669812 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669812 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669812' 00:18:30.645 killing process with pid 2669812 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2669812 00:18:30.645 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.645 00:18:30.645 Latency(us) 00:18:30.645 [2024-11-20T08:57:04.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.645 [2024-11-20T08:57:04.227Z] 
=================================================================================================================== 00:18:30.645 [2024-11-20T08:57:04.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.645 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2669812 00:18:30.903 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.by3auwdbCe 00:18:30.903 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.by3auwdbCe 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.by3auwdbCe 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.by3auwdbCe 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2671777 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2671777 /var/tmp/bdevperf.sock 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2671777 ']' 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.904 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.904 [2024-11-20 09:57:04.279999] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:30.904 [2024-11-20 09:57:04.280050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671777 ] 00:18:30.904 [2024-11-20 09:57:04.352542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.904 [2024-11-20 09:57:04.395939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.162 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.162 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.162 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.by3auwdbCe 00:18:31.162 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.421 [2024-11-20 09:57:04.853668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.421 [2024-11-20 09:57:04.860419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:31.421 [2024-11-20 09:57:04.860994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (107): Transport endpoint is not connected 00:18:31.421 [2024-11-20 09:57:04.861988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (9): Bad file descriptor 00:18:31.421 
[2024-11-20 09:57:04.862990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:31.421 [2024-11-20 09:57:04.862999] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:31.421 [2024-11-20 09:57:04.863006] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:31.421 [2024-11-20 09:57:04.863016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:31.421 request: 00:18:31.421 { 00:18:31.421 "name": "TLSTEST", 00:18:31.421 "trtype": "tcp", 00:18:31.421 "traddr": "10.0.0.2", 00:18:31.421 "adrfam": "ipv4", 00:18:31.421 "trsvcid": "4420", 00:18:31.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.421 "prchk_reftag": false, 00:18:31.421 "prchk_guard": false, 00:18:31.421 "hdgst": false, 00:18:31.421 "ddgst": false, 00:18:31.421 "psk": "key0", 00:18:31.421 "allow_unrecognized_csi": false, 00:18:31.421 "method": "bdev_nvme_attach_controller", 00:18:31.421 "req_id": 1 00:18:31.421 } 00:18:31.421 Got JSON-RPC error response 00:18:31.421 response: 00:18:31.421 { 00:18:31.421 "code": -5, 00:18:31.421 "message": "Input/output error" 00:18:31.421 } 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2671777 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2671777 ']' 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2671777 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671777 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671777' 00:18:31.421 killing process with pid 2671777 00:18:31.421 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2671777 00:18:31.422 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.422 00:18:31.422 Latency(us) 00:18:31.422 [2024-11-20T08:57:05.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.422 [2024-11-20T08:57:05.004Z] =================================================================================================================== 00:18:31.422 [2024-11-20T08:57:05.004Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.422 09:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2671777 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fGXqrDF2Mb 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fGXqrDF2Mb 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fGXqrDF2Mb 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fGXqrDF2Mb 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2671995 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2671995 
/var/tmp/bdevperf.sock 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2671995 ']' 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.680 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.680 [2024-11-20 09:57:05.148432] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:31.680 [2024-11-20 09:57:05.148478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671995 ] 00:18:31.680 [2024-11-20 09:57:05.222397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.939 [2024-11-20 09:57:05.264636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.939 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.939 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.939 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fGXqrDF2Mb 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:32.198 [2024-11-20 09:57:05.714979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.198 [2024-11-20 09:57:05.722522] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:32.198 [2024-11-20 09:57:05.722544] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:32.198 [2024-11-20 09:57:05.722582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:32.198 [2024-11-20 09:57:05.723344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04170 (107): Transport endpoint is not connected 00:18:32.198 [2024-11-20 09:57:05.724337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04170 (9): Bad file descriptor 00:18:32.198 [2024-11-20 09:57:05.725339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:32.198 [2024-11-20 09:57:05.725349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:32.198 [2024-11-20 09:57:05.725355] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:32.198 [2024-11-20 09:57:05.725369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:32.198 request: 00:18:32.198 { 00:18:32.198 "name": "TLSTEST", 00:18:32.198 "trtype": "tcp", 00:18:32.198 "traddr": "10.0.0.2", 00:18:32.198 "adrfam": "ipv4", 00:18:32.198 "trsvcid": "4420", 00:18:32.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:32.198 "prchk_reftag": false, 00:18:32.198 "prchk_guard": false, 00:18:32.198 "hdgst": false, 00:18:32.198 "ddgst": false, 00:18:32.198 "psk": "key0", 00:18:32.198 "allow_unrecognized_csi": false, 00:18:32.198 "method": "bdev_nvme_attach_controller", 00:18:32.198 "req_id": 1 00:18:32.198 } 00:18:32.198 Got JSON-RPC error response 00:18:32.198 response: 00:18:32.198 { 00:18:32.198 "code": -5, 00:18:32.198 "message": "Input/output error" 00:18:32.198 } 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2671995 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2671995 ']' 00:18:32.198 09:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2671995 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.198 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671995 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671995' 00:18:32.457 killing process with pid 2671995 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2671995 00:18:32.457 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.457 00:18:32.457 Latency(us) 00:18:32.457 [2024-11-20T08:57:06.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.457 [2024-11-20T08:57:06.039Z] =================================================================================================================== 00:18:32.457 [2024-11-20T08:57:06.039Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2671995 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.457 09:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fGXqrDF2Mb 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fGXqrDF2Mb 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fGXqrDF2Mb 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fGXqrDF2Mb 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2672303 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2672303 /var/tmp/bdevperf.sock 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2672303 ']' 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.457 09:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.457 [2024-11-20 09:57:05.993959] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:32.457 [2024-11-20 09:57:05.994007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672303 ] 00:18:32.716 [2024-11-20 09:57:06.070376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.716 [2024-11-20 09:57:06.111911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.716 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.716 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.716 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fGXqrDF2Mb 00:18:32.975 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.235 [2024-11-20 09:57:06.577556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.235 [2024-11-20 09:57:06.584712] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.235 [2024-11-20 09:57:06.584734] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.235 [2024-11-20 09:57:06.584772] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:33.235 [2024-11-20 09:57:06.584881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9170 (107): Transport endpoint is not connected 00:18:33.235 [2024-11-20 09:57:06.585875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9170 (9): Bad file descriptor 00:18:33.235 [2024-11-20 09:57:06.586876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:33.235 [2024-11-20 09:57:06.586892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:33.235 [2024-11-20 09:57:06.586898] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:33.235 [2024-11-20 09:57:06.586908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:33.235 request: 00:18:33.235 { 00:18:33.235 "name": "TLSTEST", 00:18:33.235 "trtype": "tcp", 00:18:33.235 "traddr": "10.0.0.2", 00:18:33.235 "adrfam": "ipv4", 00:18:33.235 "trsvcid": "4420", 00:18:33.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:33.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.235 "prchk_reftag": false, 00:18:33.235 "prchk_guard": false, 00:18:33.235 "hdgst": false, 00:18:33.235 "ddgst": false, 00:18:33.235 "psk": "key0", 00:18:33.235 "allow_unrecognized_csi": false, 00:18:33.235 "method": "bdev_nvme_attach_controller", 00:18:33.235 "req_id": 1 00:18:33.235 } 00:18:33.235 Got JSON-RPC error response 00:18:33.235 response: 00:18:33.235 { 00:18:33.235 "code": -5, 00:18:33.235 "message": "Input/output error" 00:18:33.235 } 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2672303 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2672303 ']' 00:18:33.235 09:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2672303 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672303 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672303' 00:18:33.235 killing process with pid 2672303 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2672303 00:18:33.235 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.235 00:18:33.235 Latency(us) 00:18:33.235 [2024-11-20T08:57:06.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.235 [2024-11-20T08:57:06.817Z] =================================================================================================================== 00:18:33.235 [2024-11-20T08:57:06.817Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2672303 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.235 09:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2672642 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.235 09:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2672642 /var/tmp/bdevperf.sock 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2672642 ']' 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.235 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.495 09:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.495 [2024-11-20 09:57:06.855186] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:33.495 [2024-11-20 09:57:06.855237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672642 ] 00:18:33.495 [2024-11-20 09:57:06.916745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.495 [2024-11-20 09:57:06.958557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.495 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.495 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.495 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:33.754 [2024-11-20 09:57:07.209001] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:33.754 [2024-11-20 09:57:07.209027] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:33.754 request: 00:18:33.754 { 00:18:33.754 "name": "key0", 00:18:33.754 "path": "", 00:18:33.754 "method": "keyring_file_add_key", 00:18:33.754 "req_id": 1 00:18:33.754 } 00:18:33.754 Got JSON-RPC error response 00:18:33.754 response: 00:18:33.754 { 00:18:33.754 "code": -1, 00:18:33.754 "message": "Operation not permitted" 00:18:33.754 } 00:18:33.754 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.013 [2024-11-20 09:57:07.413618] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:34.013 [2024-11-20 09:57:07.413651] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:34.013 request: 00:18:34.013 { 00:18:34.013 "name": "TLSTEST", 00:18:34.013 "trtype": "tcp", 00:18:34.013 "traddr": "10.0.0.2", 00:18:34.013 "adrfam": "ipv4", 00:18:34.013 "trsvcid": "4420", 00:18:34.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.013 "prchk_reftag": false, 00:18:34.013 "prchk_guard": false, 00:18:34.013 "hdgst": false, 00:18:34.013 "ddgst": false, 00:18:34.013 "psk": "key0", 00:18:34.013 "allow_unrecognized_csi": false, 00:18:34.013 "method": "bdev_nvme_attach_controller", 00:18:34.013 "req_id": 1 00:18:34.013 } 00:18:34.013 Got JSON-RPC error response 00:18:34.013 response: 00:18:34.013 { 00:18:34.013 "code": -126, 00:18:34.013 "message": "Required key not available" 00:18:34.013 } 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2672642 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2672642 ']' 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2672642 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672642 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672642' 00:18:34.013 killing process with pid 2672642 
00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2672642 00:18:34.013 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.013 00:18:34.013 Latency(us) 00:18:34.013 [2024-11-20T08:57:07.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.013 [2024-11-20T08:57:07.595Z] =================================================================================================================== 00:18:34.013 [2024-11-20T08:57:07.595Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.013 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2672642 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2667356 ']' 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667356' 00:18:34.273 killing process with pid 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2667356 00:18:34.273 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Q1ZPp63qoV 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:34.532 09:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Q1ZPp63qoV 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2672754 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2672754 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2672754 ']' 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.532 09:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.532 [2024-11-20 09:57:07.955136] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:34.532 [2024-11-20 09:57:07.955188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.532 [2024-11-20 09:57:08.034274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.532 [2024-11-20 09:57:08.074842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.532 [2024-11-20 09:57:08.074877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.532 [2024-11-20 09:57:08.074885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.532 [2024-11-20 09:57:08.074893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.532 [2024-11-20 09:57:08.074899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.532 [2024-11-20 09:57:08.075477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q1ZPp63qoV 00:18:34.792 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.051 [2024-11-20 09:57:08.384888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.051 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.051 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.309 [2024-11-20 09:57:08.765851] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.309 [2024-11-20 09:57:08.766044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:35.309 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.568 malloc0 00:18:35.568 09:57:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.869 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:18:35.869 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q1ZPp63qoV 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q1ZPp63qoV 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2673149 00:18:36.144 09:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2673149 /var/tmp/bdevperf.sock 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2673149 ']' 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.144 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.144 [2024-11-20 09:57:09.614710] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:36.144 [2024-11-20 09:57:09.614766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673149 ] 00:18:36.144 [2024-11-20 09:57:09.690629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.455 [2024-11-20 09:57:09.731782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.455 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.455 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.455 09:57:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:18:36.744 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.744 [2024-11-20 09:57:10.185643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.744 TLSTESTn1 00:18:36.744 09:57:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:37.003 Running I/O for 10 seconds... 
00:18:38.872 5414.00 IOPS, 21.15 MiB/s [2024-11-20T08:57:13.389Z] 5499.00 IOPS, 21.48 MiB/s [2024-11-20T08:57:14.764Z] 5548.00 IOPS, 21.67 MiB/s [2024-11-20T08:57:15.700Z] 5566.00 IOPS, 21.74 MiB/s [2024-11-20T08:57:16.636Z] 5576.00 IOPS, 21.78 MiB/s [2024-11-20T08:57:17.573Z] 5579.83 IOPS, 21.80 MiB/s [2024-11-20T08:57:18.510Z] 5582.71 IOPS, 21.81 MiB/s [2024-11-20T08:57:19.446Z] 5583.25 IOPS, 21.81 MiB/s [2024-11-20T08:57:20.821Z] 5593.11 IOPS, 21.85 MiB/s [2024-11-20T08:57:20.821Z] 5584.00 IOPS, 21.81 MiB/s 00:18:47.239 Latency(us) 00:18:47.239 [2024-11-20T08:57:20.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.239 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:47.239 Verification LBA range: start 0x0 length 0x2000 00:18:47.239 TLSTESTn1 : 10.01 5589.13 21.83 0.00 0.00 22867.23 5898.24 23592.96 00:18:47.239 [2024-11-20T08:57:20.821Z] =================================================================================================================== 00:18:47.239 [2024-11-20T08:57:20.821Z] Total : 5589.13 21.83 0.00 0.00 22867.23 5898.24 23592.96 00:18:47.239 { 00:18:47.239 "results": [ 00:18:47.239 { 00:18:47.239 "job": "TLSTESTn1", 00:18:47.239 "core_mask": "0x4", 00:18:47.239 "workload": "verify", 00:18:47.239 "status": "finished", 00:18:47.239 "verify_range": { 00:18:47.239 "start": 0, 00:18:47.239 "length": 8192 00:18:47.239 }, 00:18:47.239 "queue_depth": 128, 00:18:47.239 "io_size": 4096, 00:18:47.239 "runtime": 10.013546, 00:18:47.239 "iops": 5589.128965902788, 00:18:47.239 "mibps": 21.832535023057765, 00:18:47.239 "io_failed": 0, 00:18:47.239 "io_timeout": 0, 00:18:47.239 "avg_latency_us": 22867.234115341776, 00:18:47.239 "min_latency_us": 5898.24, 00:18:47.239 "max_latency_us": 23592.96 00:18:47.239 } 00:18:47.239 ], 00:18:47.239 "core_count": 1 00:18:47.239 } 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2673149 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2673149 ']' 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2673149 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2673149 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2673149' 00:18:47.239 killing process with pid 2673149 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2673149 00:18:47.239 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.239 00:18:47.239 Latency(us) 00:18:47.239 [2024-11-20T08:57:20.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.239 [2024-11-20T08:57:20.821Z] =================================================================================================================== 00:18:47.239 [2024-11-20T08:57:20.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2673149 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Q1ZPp63qoV 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q1ZPp63qoV 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q1ZPp63qoV 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q1ZPp63qoV 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q1ZPp63qoV 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2674812 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.239 09:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2674812 /var/tmp/bdevperf.sock 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2674812 ']' 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.239 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.239 [2024-11-20 09:57:20.693543] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:18:47.239 [2024-11-20 09:57:20.693589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674812 ] 00:18:47.239 [2024-11-20 09:57:20.766966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.239 [2024-11-20 09:57:20.808143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.498 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.498 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.498 09:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:18:47.498 [2024-11-20 09:57:21.065685] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q1ZPp63qoV': 0100666 00:18:47.498 [2024-11-20 09:57:21.065709] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:47.498 request: 00:18:47.498 { 00:18:47.498 "name": "key0", 00:18:47.498 "path": "/tmp/tmp.Q1ZPp63qoV", 00:18:47.498 "method": "keyring_file_add_key", 00:18:47.498 "req_id": 1 00:18:47.498 } 00:18:47.498 Got JSON-RPC error response 00:18:47.498 response: 00:18:47.498 { 00:18:47.498 "code": -1, 00:18:47.498 "message": "Operation not permitted" 00:18:47.498 } 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.757 [2024-11-20 09:57:21.238236] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.757 [2024-11-20 09:57:21.238272] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:47.757 request: 00:18:47.757 { 00:18:47.757 "name": "TLSTEST", 00:18:47.757 "trtype": "tcp", 00:18:47.757 "traddr": "10.0.0.2", 00:18:47.757 "adrfam": "ipv4", 00:18:47.757 "trsvcid": "4420", 00:18:47.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.757 "prchk_reftag": false, 00:18:47.757 "prchk_guard": false, 00:18:47.757 "hdgst": false, 00:18:47.757 "ddgst": false, 00:18:47.757 "psk": "key0", 00:18:47.757 "allow_unrecognized_csi": false, 00:18:47.757 "method": "bdev_nvme_attach_controller", 00:18:47.757 "req_id": 1 00:18:47.757 } 00:18:47.757 Got JSON-RPC error response 00:18:47.757 response: 00:18:47.757 { 00:18:47.757 "code": -126, 00:18:47.757 "message": "Required key not available" 00:18:47.757 } 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2674812 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2674812 ']' 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2674812 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2674812 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2674812' 00:18:47.757 killing process with pid 2674812 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2674812 00:18:47.757 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.757 00:18:47.757 Latency(us) 00:18:47.757 [2024-11-20T08:57:21.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.757 [2024-11-20T08:57:21.339Z] =================================================================================================================== 00:18:47.757 [2024-11-20T08:57:21.339Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.757 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2674812 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2672754 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2672754 ']' 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2672754 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672754 00:18:48.016 
09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672754' 00:18:48.016 killing process with pid 2672754 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2672754 00:18:48.016 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2672754 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2675033 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2675033 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2675033 ']' 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:48.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.276 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.276 [2024-11-20 09:57:21.726744] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:18:48.276 [2024-11-20 09:57:21.726794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.276 [2024-11-20 09:57:21.802453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.276 [2024-11-20 09:57:21.838511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.276 [2024-11-20 09:57:21.838546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.276 [2024-11-20 09:57:21.838554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.276 [2024-11-20 09:57:21.838559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.276 [2024-11-20 09:57:21.838564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.276 [2024-11-20 09:57:21.839123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q1ZPp63qoV 00:18:48.536 09:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.795 [2024-11-20 09:57:22.150142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.795 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.053 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.053 [2024-11-20 09:57:22.551167] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.053 [2024-11-20 09:57:22.551418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.053 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.312 malloc0 00:18:49.312 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.570 09:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:18:49.570 [2024-11-20 09:57:23.108515] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q1ZPp63qoV': 0100666 00:18:49.570 [2024-11-20 09:57:23.108545] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:49.570 request: 00:18:49.570 { 00:18:49.570 "name": "key0", 00:18:49.570 "path": "/tmp/tmp.Q1ZPp63qoV", 00:18:49.570 "method": "keyring_file_add_key", 00:18:49.570 "req_id": 1 
00:18:49.570 } 00:18:49.570 Got JSON-RPC error response 00:18:49.570 response: 00:18:49.570 { 00:18:49.570 "code": -1, 00:18:49.570 "message": "Operation not permitted" 00:18:49.570 } 00:18:49.570 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.829 [2024-11-20 09:57:23.301030] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:49.829 [2024-11-20 09:57:23.301060] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:49.829 request: 00:18:49.829 { 00:18:49.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.829 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.829 "psk": "key0", 00:18:49.829 "method": "nvmf_subsystem_add_host", 00:18:49.829 "req_id": 1 00:18:49.829 } 00:18:49.829 Got JSON-RPC error response 00:18:49.829 response: 00:18:49.829 { 00:18:49.829 "code": -32603, 00:18:49.829 "message": "Internal error" 00:18:49.829 } 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2675033 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2675033 ']' 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2675033 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.829 09:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.829 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2675033 00:18:49.830 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.830 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.830 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675033' 00:18:49.830 killing process with pid 2675033 00:18:49.830 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2675033 00:18:49.830 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2675033 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Q1ZPp63qoV 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2675369 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2675369 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2675369 ']' 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.089 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.089 [2024-11-20 09:57:23.594692] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:18:50.089 [2024-11-20 09:57:23.594743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.348 [2024-11-20 09:57:23.674506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.348 [2024-11-20 09:57:23.716267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.348 [2024-11-20 09:57:23.716302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.348 [2024-11-20 09:57:23.716314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.348 [2024-11-20 09:57:23.716320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.348 [2024-11-20 09:57:23.716325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.348 [2024-11-20 09:57:23.716877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q1ZPp63qoV
00:18:50.348 09:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:50.607 [2024-11-20 09:57:24.020450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:50.607 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:50.866 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:50.866 [2024-11-20 09:57:24.401422] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:50.866 [2024-11-20 09:57:24.401646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:50.866 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:51.125 malloc0
00:18:51.125 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:51.385 09:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2675766
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2675766 /var/tmp/bdevperf.sock
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2675766 ']'
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:51.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:51.644 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:51.903 [2024-11-20 09:57:25.259498] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:18:51.903 [2024-11-20 09:57:25.259549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675766 ]
00:18:51.903 [2024-11-20 09:57:25.331217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:51.903 [2024-11-20 09:57:25.371105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:51.903 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:51.903 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:51.903 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV
00:18:52.160 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:18:52.419 [2024-11-20 09:57:25.825509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:52.419 TLSTESTn1
00:18:52.419 09:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:52.677 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:52.677 "subsystems": [ 00:18:52.677 { 00:18:52.677 "subsystem": "keyring", 00:18:52.677 "config": [ 00:18:52.677 { 00:18:52.677 "method": "keyring_file_add_key", 00:18:52.677 "params": { 00:18:52.677 "name": "key0", 00:18:52.677 "path": "/tmp/tmp.Q1ZPp63qoV" 00:18:52.677 } 00:18:52.677 } 00:18:52.677 ] 00:18:52.677 }, 00:18:52.677 { 00:18:52.677 "subsystem": "iobuf", 00:18:52.677 "config": [ 00:18:52.677 { 00:18:52.677 "method": "iobuf_set_options", 00:18:52.677 "params": { 00:18:52.677 "small_pool_count": 8192, 00:18:52.677 "large_pool_count": 1024, 00:18:52.677 "small_bufsize": 8192, 00:18:52.677 "large_bufsize": 135168, 00:18:52.677 "enable_numa": false 00:18:52.677 } 00:18:52.677 } 00:18:52.677 ] 00:18:52.677 }, 00:18:52.677 { 00:18:52.677 "subsystem": "sock", 00:18:52.677 "config": [ 00:18:52.677 { 00:18:52.677 "method": "sock_set_default_impl", 00:18:52.677 "params": { 00:18:52.677 "impl_name": "posix" 00:18:52.677 } 00:18:52.677 }, 00:18:52.677 { 00:18:52.677 "method": "sock_impl_set_options", 00:18:52.677 "params": { 00:18:52.677 "impl_name": "ssl", 00:18:52.677 "recv_buf_size": 4096, 00:18:52.677 "send_buf_size": 4096, 00:18:52.677 "enable_recv_pipe": true, 00:18:52.677 "enable_quickack": false, 00:18:52.677 "enable_placement_id": 0, 00:18:52.677 "enable_zerocopy_send_server": true, 00:18:52.677 "enable_zerocopy_send_client": false, 00:18:52.677 "zerocopy_threshold": 0, 00:18:52.677 "tls_version": 0, 00:18:52.677 "enable_ktls": false 00:18:52.677 } 00:18:52.677 }, 00:18:52.677 { 00:18:52.677 "method": "sock_impl_set_options", 00:18:52.677 "params": { 00:18:52.677 "impl_name": "posix", 00:18:52.677 "recv_buf_size": 2097152, 00:18:52.678 "send_buf_size": 2097152, 00:18:52.678 "enable_recv_pipe": true, 00:18:52.678 "enable_quickack": false, 00:18:52.678 "enable_placement_id": 0, 
00:18:52.678 "enable_zerocopy_send_server": true, 00:18:52.678 "enable_zerocopy_send_client": false, 00:18:52.678 "zerocopy_threshold": 0, 00:18:52.678 "tls_version": 0, 00:18:52.678 "enable_ktls": false 00:18:52.678 } 00:18:52.678 } 00:18:52.678 ] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "vmd", 00:18:52.678 "config": [] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "accel", 00:18:52.678 "config": [ 00:18:52.678 { 00:18:52.678 "method": "accel_set_options", 00:18:52.678 "params": { 00:18:52.678 "small_cache_size": 128, 00:18:52.678 "large_cache_size": 16, 00:18:52.678 "task_count": 2048, 00:18:52.678 "sequence_count": 2048, 00:18:52.678 "buf_count": 2048 00:18:52.678 } 00:18:52.678 } 00:18:52.678 ] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "bdev", 00:18:52.678 "config": [ 00:18:52.678 { 00:18:52.678 "method": "bdev_set_options", 00:18:52.678 "params": { 00:18:52.678 "bdev_io_pool_size": 65535, 00:18:52.678 "bdev_io_cache_size": 256, 00:18:52.678 "bdev_auto_examine": true, 00:18:52.678 "iobuf_small_cache_size": 128, 00:18:52.678 "iobuf_large_cache_size": 16 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_raid_set_options", 00:18:52.678 "params": { 00:18:52.678 "process_window_size_kb": 1024, 00:18:52.678 "process_max_bandwidth_mb_sec": 0 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_iscsi_set_options", 00:18:52.678 "params": { 00:18:52.678 "timeout_sec": 30 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_nvme_set_options", 00:18:52.678 "params": { 00:18:52.678 "action_on_timeout": "none", 00:18:52.678 "timeout_us": 0, 00:18:52.678 "timeout_admin_us": 0, 00:18:52.678 "keep_alive_timeout_ms": 10000, 00:18:52.678 "arbitration_burst": 0, 00:18:52.678 "low_priority_weight": 0, 00:18:52.678 "medium_priority_weight": 0, 00:18:52.678 "high_priority_weight": 0, 00:18:52.678 "nvme_adminq_poll_period_us": 10000, 00:18:52.678 "nvme_ioq_poll_period_us": 0, 
00:18:52.678 "io_queue_requests": 0, 00:18:52.678 "delay_cmd_submit": true, 00:18:52.678 "transport_retry_count": 4, 00:18:52.678 "bdev_retry_count": 3, 00:18:52.678 "transport_ack_timeout": 0, 00:18:52.678 "ctrlr_loss_timeout_sec": 0, 00:18:52.678 "reconnect_delay_sec": 0, 00:18:52.678 "fast_io_fail_timeout_sec": 0, 00:18:52.678 "disable_auto_failback": false, 00:18:52.678 "generate_uuids": false, 00:18:52.678 "transport_tos": 0, 00:18:52.678 "nvme_error_stat": false, 00:18:52.678 "rdma_srq_size": 0, 00:18:52.678 "io_path_stat": false, 00:18:52.678 "allow_accel_sequence": false, 00:18:52.678 "rdma_max_cq_size": 0, 00:18:52.678 "rdma_cm_event_timeout_ms": 0, 00:18:52.678 "dhchap_digests": [ 00:18:52.678 "sha256", 00:18:52.678 "sha384", 00:18:52.678 "sha512" 00:18:52.678 ], 00:18:52.678 "dhchap_dhgroups": [ 00:18:52.678 "null", 00:18:52.678 "ffdhe2048", 00:18:52.678 "ffdhe3072", 00:18:52.678 "ffdhe4096", 00:18:52.678 "ffdhe6144", 00:18:52.678 "ffdhe8192" 00:18:52.678 ] 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_nvme_set_hotplug", 00:18:52.678 "params": { 00:18:52.678 "period_us": 100000, 00:18:52.678 "enable": false 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_malloc_create", 00:18:52.678 "params": { 00:18:52.678 "name": "malloc0", 00:18:52.678 "num_blocks": 8192, 00:18:52.678 "block_size": 4096, 00:18:52.678 "physical_block_size": 4096, 00:18:52.678 "uuid": "2fa0fc97-3ec2-4309-ac5c-aa0bc1c6efbd", 00:18:52.678 "optimal_io_boundary": 0, 00:18:52.678 "md_size": 0, 00:18:52.678 "dif_type": 0, 00:18:52.678 "dif_is_head_of_md": false, 00:18:52.678 "dif_pi_format": 0 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "bdev_wait_for_examine" 00:18:52.678 } 00:18:52.678 ] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "nbd", 00:18:52.678 "config": [] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "scheduler", 00:18:52.678 "config": [ 00:18:52.678 { 00:18:52.678 "method": 
"framework_set_scheduler", 00:18:52.678 "params": { 00:18:52.678 "name": "static" 00:18:52.678 } 00:18:52.678 } 00:18:52.678 ] 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "subsystem": "nvmf", 00:18:52.678 "config": [ 00:18:52.678 { 00:18:52.678 "method": "nvmf_set_config", 00:18:52.678 "params": { 00:18:52.678 "discovery_filter": "match_any", 00:18:52.678 "admin_cmd_passthru": { 00:18:52.678 "identify_ctrlr": false 00:18:52.678 }, 00:18:52.678 "dhchap_digests": [ 00:18:52.678 "sha256", 00:18:52.678 "sha384", 00:18:52.678 "sha512" 00:18:52.678 ], 00:18:52.678 "dhchap_dhgroups": [ 00:18:52.678 "null", 00:18:52.678 "ffdhe2048", 00:18:52.678 "ffdhe3072", 00:18:52.678 "ffdhe4096", 00:18:52.678 "ffdhe6144", 00:18:52.678 "ffdhe8192" 00:18:52.678 ] 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_set_max_subsystems", 00:18:52.678 "params": { 00:18:52.678 "max_subsystems": 1024 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_set_crdt", 00:18:52.678 "params": { 00:18:52.678 "crdt1": 0, 00:18:52.678 "crdt2": 0, 00:18:52.678 "crdt3": 0 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_create_transport", 00:18:52.678 "params": { 00:18:52.678 "trtype": "TCP", 00:18:52.678 "max_queue_depth": 128, 00:18:52.678 "max_io_qpairs_per_ctrlr": 127, 00:18:52.678 "in_capsule_data_size": 4096, 00:18:52.678 "max_io_size": 131072, 00:18:52.678 "io_unit_size": 131072, 00:18:52.678 "max_aq_depth": 128, 00:18:52.678 "num_shared_buffers": 511, 00:18:52.678 "buf_cache_size": 4294967295, 00:18:52.678 "dif_insert_or_strip": false, 00:18:52.678 "zcopy": false, 00:18:52.678 "c2h_success": false, 00:18:52.678 "sock_priority": 0, 00:18:52.678 "abort_timeout_sec": 1, 00:18:52.678 "ack_timeout": 0, 00:18:52.678 "data_wr_pool_size": 0 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_create_subsystem", 00:18:52.678 "params": { 00:18:52.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.678 
"allow_any_host": false, 00:18:52.678 "serial_number": "SPDK00000000000001", 00:18:52.678 "model_number": "SPDK bdev Controller", 00:18:52.678 "max_namespaces": 10, 00:18:52.678 "min_cntlid": 1, 00:18:52.678 "max_cntlid": 65519, 00:18:52.678 "ana_reporting": false 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_subsystem_add_host", 00:18:52.678 "params": { 00:18:52.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.678 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.678 "psk": "key0" 00:18:52.678 } 00:18:52.678 }, 00:18:52.678 { 00:18:52.678 "method": "nvmf_subsystem_add_ns", 00:18:52.678 "params": { 00:18:52.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.678 "namespace": { 00:18:52.678 "nsid": 1, 00:18:52.678 "bdev_name": "malloc0", 00:18:52.678 "nguid": "2FA0FC973EC24309AC5CAA0BC1C6EFBD", 00:18:52.678 "uuid": "2fa0fc97-3ec2-4309-ac5c-aa0bc1c6efbd", 00:18:52.678 "no_auto_visible": false 00:18:52.678 } 00:18:52.679 } 00:18:52.679 }, 00:18:52.679 { 00:18:52.679 "method": "nvmf_subsystem_add_listener", 00:18:52.679 "params": { 00:18:52.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.679 "listen_address": { 00:18:52.679 "trtype": "TCP", 00:18:52.679 "adrfam": "IPv4", 00:18:52.679 "traddr": "10.0.0.2", 00:18:52.679 "trsvcid": "4420" 00:18:52.679 }, 00:18:52.679 "secure_channel": true 00:18:52.679 } 00:18:52.679 } 00:18:52.679 ] 00:18:52.679 } 00:18:52.679 ] 00:18:52.679 }' 00:18:52.679 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.938 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:52.938 "subsystems": [ 00:18:52.938 { 00:18:52.938 "subsystem": "keyring", 00:18:52.938 "config": [ 00:18:52.938 { 00:18:52.938 "method": "keyring_file_add_key", 00:18:52.938 "params": { 00:18:52.938 "name": "key0", 00:18:52.938 "path": "/tmp/tmp.Q1ZPp63qoV" 00:18:52.938 } 
00:18:52.938 } 00:18:52.938 ] 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "subsystem": "iobuf", 00:18:52.938 "config": [ 00:18:52.938 { 00:18:52.938 "method": "iobuf_set_options", 00:18:52.938 "params": { 00:18:52.938 "small_pool_count": 8192, 00:18:52.938 "large_pool_count": 1024, 00:18:52.938 "small_bufsize": 8192, 00:18:52.938 "large_bufsize": 135168, 00:18:52.938 "enable_numa": false 00:18:52.938 } 00:18:52.938 } 00:18:52.938 ] 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "subsystem": "sock", 00:18:52.938 "config": [ 00:18:52.938 { 00:18:52.938 "method": "sock_set_default_impl", 00:18:52.938 "params": { 00:18:52.938 "impl_name": "posix" 00:18:52.938 } 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "method": "sock_impl_set_options", 00:18:52.938 "params": { 00:18:52.938 "impl_name": "ssl", 00:18:52.938 "recv_buf_size": 4096, 00:18:52.938 "send_buf_size": 4096, 00:18:52.938 "enable_recv_pipe": true, 00:18:52.938 "enable_quickack": false, 00:18:52.938 "enable_placement_id": 0, 00:18:52.938 "enable_zerocopy_send_server": true, 00:18:52.938 "enable_zerocopy_send_client": false, 00:18:52.938 "zerocopy_threshold": 0, 00:18:52.938 "tls_version": 0, 00:18:52.938 "enable_ktls": false 00:18:52.938 } 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "method": "sock_impl_set_options", 00:18:52.938 "params": { 00:18:52.938 "impl_name": "posix", 00:18:52.938 "recv_buf_size": 2097152, 00:18:52.938 "send_buf_size": 2097152, 00:18:52.938 "enable_recv_pipe": true, 00:18:52.938 "enable_quickack": false, 00:18:52.938 "enable_placement_id": 0, 00:18:52.938 "enable_zerocopy_send_server": true, 00:18:52.938 "enable_zerocopy_send_client": false, 00:18:52.938 "zerocopy_threshold": 0, 00:18:52.938 "tls_version": 0, 00:18:52.938 "enable_ktls": false 00:18:52.938 } 00:18:52.938 } 00:18:52.938 ] 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "subsystem": "vmd", 00:18:52.938 "config": [] 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "subsystem": "accel", 00:18:52.938 "config": [ 00:18:52.938 { 00:18:52.938 
"method": "accel_set_options", 00:18:52.938 "params": { 00:18:52.938 "small_cache_size": 128, 00:18:52.938 "large_cache_size": 16, 00:18:52.938 "task_count": 2048, 00:18:52.938 "sequence_count": 2048, 00:18:52.938 "buf_count": 2048 00:18:52.938 } 00:18:52.938 } 00:18:52.938 ] 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "subsystem": "bdev", 00:18:52.938 "config": [ 00:18:52.938 { 00:18:52.938 "method": "bdev_set_options", 00:18:52.938 "params": { 00:18:52.938 "bdev_io_pool_size": 65535, 00:18:52.938 "bdev_io_cache_size": 256, 00:18:52.938 "bdev_auto_examine": true, 00:18:52.938 "iobuf_small_cache_size": 128, 00:18:52.938 "iobuf_large_cache_size": 16 00:18:52.938 } 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "method": "bdev_raid_set_options", 00:18:52.938 "params": { 00:18:52.938 "process_window_size_kb": 1024, 00:18:52.938 "process_max_bandwidth_mb_sec": 0 00:18:52.938 } 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "method": "bdev_iscsi_set_options", 00:18:52.938 "params": { 00:18:52.938 "timeout_sec": 30 00:18:52.938 } 00:18:52.938 }, 00:18:52.938 { 00:18:52.938 "method": "bdev_nvme_set_options", 00:18:52.938 "params": { 00:18:52.938 "action_on_timeout": "none", 00:18:52.938 "timeout_us": 0, 00:18:52.938 "timeout_admin_us": 0, 00:18:52.938 "keep_alive_timeout_ms": 10000, 00:18:52.938 "arbitration_burst": 0, 00:18:52.938 "low_priority_weight": 0, 00:18:52.938 "medium_priority_weight": 0, 00:18:52.938 "high_priority_weight": 0, 00:18:52.938 "nvme_adminq_poll_period_us": 10000, 00:18:52.938 "nvme_ioq_poll_period_us": 0, 00:18:52.938 "io_queue_requests": 512, 00:18:52.938 "delay_cmd_submit": true, 00:18:52.938 "transport_retry_count": 4, 00:18:52.938 "bdev_retry_count": 3, 00:18:52.938 "transport_ack_timeout": 0, 00:18:52.938 "ctrlr_loss_timeout_sec": 0, 00:18:52.938 "reconnect_delay_sec": 0, 00:18:52.938 "fast_io_fail_timeout_sec": 0, 00:18:52.938 "disable_auto_failback": false, 00:18:52.938 "generate_uuids": false, 00:18:52.938 "transport_tos": 0, 00:18:52.938 
"nvme_error_stat": false, 00:18:52.938 "rdma_srq_size": 0, 00:18:52.938 "io_path_stat": false, 00:18:52.938 "allow_accel_sequence": false, 00:18:52.938 "rdma_max_cq_size": 0, 00:18:52.938 "rdma_cm_event_timeout_ms": 0, 00:18:52.938 "dhchap_digests": [ 00:18:52.938 "sha256", 00:18:52.938 "sha384", 00:18:52.938 "sha512" 00:18:52.938 ], 00:18:52.938 "dhchap_dhgroups": [ 00:18:52.938 "null", 00:18:52.939 "ffdhe2048", 00:18:52.939 "ffdhe3072", 00:18:52.939 "ffdhe4096", 00:18:52.939 "ffdhe6144", 00:18:52.939 "ffdhe8192" 00:18:52.939 ] 00:18:52.939 } 00:18:52.939 }, 00:18:52.939 { 00:18:52.939 "method": "bdev_nvme_attach_controller", 00:18:52.939 "params": { 00:18:52.939 "name": "TLSTEST", 00:18:52.939 "trtype": "TCP", 00:18:52.939 "adrfam": "IPv4", 00:18:52.939 "traddr": "10.0.0.2", 00:18:52.939 "trsvcid": "4420", 00:18:52.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.939 "prchk_reftag": false, 00:18:52.939 "prchk_guard": false, 00:18:52.939 "ctrlr_loss_timeout_sec": 0, 00:18:52.939 "reconnect_delay_sec": 0, 00:18:52.939 "fast_io_fail_timeout_sec": 0, 00:18:52.939 "psk": "key0", 00:18:52.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.939 "hdgst": false, 00:18:52.939 "ddgst": false, 00:18:52.939 "multipath": "multipath" 00:18:52.939 } 00:18:52.939 }, 00:18:52.939 { 00:18:52.939 "method": "bdev_nvme_set_hotplug", 00:18:52.939 "params": { 00:18:52.939 "period_us": 100000, 00:18:52.939 "enable": false 00:18:52.939 } 00:18:52.939 }, 00:18:52.939 { 00:18:52.939 "method": "bdev_wait_for_examine" 00:18:52.939 } 00:18:52.939 ] 00:18:52.939 }, 00:18:52.939 { 00:18:52.939 "subsystem": "nbd", 00:18:52.939 "config": [] 00:18:52.939 } 00:18:52.939 ] 00:18:52.939 }' 00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2675766 00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2675766 ']' 00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2675766
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2675766
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675766'
killing process with pid 2675766
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2675766
Received shutdown signal, test time was about 10.000000 seconds
00:18:52.939
00:18:52.939 Latency(us)
00:18:52.939 [2024-11-20T08:57:26.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:52.939 [2024-11-20T08:57:26.521Z] ===================================================================================================================
00:18:52.939 [2024-11-20T08:57:26.521Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:52.939 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2675766
00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2675369
00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2675369 ']'
00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2675369
00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2675369 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675369' 00:18:53.198 killing process with pid 2675369 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2675369 00:18:53.198 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2675369 00:18:53.458 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:53.458 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.458 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.458 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:53.458 "subsystems": [ 00:18:53.458 { 00:18:53.458 "subsystem": "keyring", 00:18:53.458 "config": [ 00:18:53.458 { 00:18:53.458 "method": "keyring_file_add_key", 00:18:53.458 "params": { 00:18:53.459 "name": "key0", 00:18:53.459 "path": "/tmp/tmp.Q1ZPp63qoV" 00:18:53.459 } 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "iobuf", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "iobuf_set_options", 00:18:53.459 "params": { 00:18:53.459 "small_pool_count": 8192, 00:18:53.459 "large_pool_count": 1024, 00:18:53.459 "small_bufsize": 8192, 00:18:53.459 "large_bufsize": 135168, 00:18:53.459 "enable_numa": false 00:18:53.459 } 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 
00:18:53.459 { 00:18:53.459 "subsystem": "sock", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "sock_set_default_impl", 00:18:53.459 "params": { 00:18:53.459 "impl_name": "posix" 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "sock_impl_set_options", 00:18:53.459 "params": { 00:18:53.459 "impl_name": "ssl", 00:18:53.459 "recv_buf_size": 4096, 00:18:53.459 "send_buf_size": 4096, 00:18:53.459 "enable_recv_pipe": true, 00:18:53.459 "enable_quickack": false, 00:18:53.459 "enable_placement_id": 0, 00:18:53.459 "enable_zerocopy_send_server": true, 00:18:53.459 "enable_zerocopy_send_client": false, 00:18:53.459 "zerocopy_threshold": 0, 00:18:53.459 "tls_version": 0, 00:18:53.459 "enable_ktls": false 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "sock_impl_set_options", 00:18:53.459 "params": { 00:18:53.459 "impl_name": "posix", 00:18:53.459 "recv_buf_size": 2097152, 00:18:53.459 "send_buf_size": 2097152, 00:18:53.459 "enable_recv_pipe": true, 00:18:53.459 "enable_quickack": false, 00:18:53.459 "enable_placement_id": 0, 00:18:53.459 "enable_zerocopy_send_server": true, 00:18:53.459 "enable_zerocopy_send_client": false, 00:18:53.459 "zerocopy_threshold": 0, 00:18:53.459 "tls_version": 0, 00:18:53.459 "enable_ktls": false 00:18:53.459 } 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "vmd", 00:18:53.459 "config": [] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "accel", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "accel_set_options", 00:18:53.459 "params": { 00:18:53.459 "small_cache_size": 128, 00:18:53.459 "large_cache_size": 16, 00:18:53.459 "task_count": 2048, 00:18:53.459 "sequence_count": 2048, 00:18:53.459 "buf_count": 2048 00:18:53.459 } 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "bdev", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "bdev_set_options", 00:18:53.459 "params": { 
00:18:53.459 "bdev_io_pool_size": 65535, 00:18:53.459 "bdev_io_cache_size": 256, 00:18:53.459 "bdev_auto_examine": true, 00:18:53.459 "iobuf_small_cache_size": 128, 00:18:53.459 "iobuf_large_cache_size": 16 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_raid_set_options", 00:18:53.459 "params": { 00:18:53.459 "process_window_size_kb": 1024, 00:18:53.459 "process_max_bandwidth_mb_sec": 0 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_iscsi_set_options", 00:18:53.459 "params": { 00:18:53.459 "timeout_sec": 30 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_nvme_set_options", 00:18:53.459 "params": { 00:18:53.459 "action_on_timeout": "none", 00:18:53.459 "timeout_us": 0, 00:18:53.459 "timeout_admin_us": 0, 00:18:53.459 "keep_alive_timeout_ms": 10000, 00:18:53.459 "arbitration_burst": 0, 00:18:53.459 "low_priority_weight": 0, 00:18:53.459 "medium_priority_weight": 0, 00:18:53.459 "high_priority_weight": 0, 00:18:53.459 "nvme_adminq_poll_period_us": 10000, 00:18:53.459 "nvme_ioq_poll_period_us": 0, 00:18:53.459 "io_queue_requests": 0, 00:18:53.459 "delay_cmd_submit": true, 00:18:53.459 "transport_retry_count": 4, 00:18:53.459 "bdev_retry_count": 3, 00:18:53.459 "transport_ack_timeout": 0, 00:18:53.459 "ctrlr_loss_timeout_sec": 0, 00:18:53.459 "reconnect_delay_sec": 0, 00:18:53.459 "fast_io_fail_timeout_sec": 0, 00:18:53.459 "disable_auto_failback": false, 00:18:53.459 "generate_uuids": false, 00:18:53.459 "transport_tos": 0, 00:18:53.459 "nvme_error_stat": false, 00:18:53.459 "rdma_srq_size": 0, 00:18:53.459 "io_path_stat": false, 00:18:53.459 "allow_accel_sequence": false, 00:18:53.459 "rdma_max_cq_size": 0, 00:18:53.459 "rdma_cm_event_timeout_ms": 0, 00:18:53.459 "dhchap_digests": [ 00:18:53.459 "sha256", 00:18:53.459 "sha384", 00:18:53.459 "sha512" 00:18:53.459 ], 00:18:53.459 "dhchap_dhgroups": [ 00:18:53.459 "null", 00:18:53.459 "ffdhe2048", 00:18:53.459 "ffdhe3072", 00:18:53.459 
"ffdhe4096", 00:18:53.459 "ffdhe6144", 00:18:53.459 "ffdhe8192" 00:18:53.459 ] 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_nvme_set_hotplug", 00:18:53.459 "params": { 00:18:53.459 "period_us": 100000, 00:18:53.459 "enable": false 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_malloc_create", 00:18:53.459 "params": { 00:18:53.459 "name": "malloc0", 00:18:53.459 "num_blocks": 8192, 00:18:53.459 "block_size": 4096, 00:18:53.459 "physical_block_size": 4096, 00:18:53.459 "uuid": "2fa0fc97-3ec2-4309-ac5c-aa0bc1c6efbd", 00:18:53.459 "optimal_io_boundary": 0, 00:18:53.459 "md_size": 0, 00:18:53.459 "dif_type": 0, 00:18:53.459 "dif_is_head_of_md": false, 00:18:53.459 "dif_pi_format": 0 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "bdev_wait_for_examine" 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "nbd", 00:18:53.459 "config": [] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "scheduler", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "framework_set_scheduler", 00:18:53.459 "params": { 00:18:53.459 "name": "static" 00:18:53.459 } 00:18:53.459 } 00:18:53.459 ] 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "subsystem": "nvmf", 00:18:53.459 "config": [ 00:18:53.459 { 00:18:53.459 "method": "nvmf_set_config", 00:18:53.459 "params": { 00:18:53.459 "discovery_filter": "match_any", 00:18:53.459 "admin_cmd_passthru": { 00:18:53.459 "identify_ctrlr": false 00:18:53.459 }, 00:18:53.459 "dhchap_digests": [ 00:18:53.459 "sha256", 00:18:53.459 "sha384", 00:18:53.459 "sha512" 00:18:53.459 ], 00:18:53.459 "dhchap_dhgroups": [ 00:18:53.459 "null", 00:18:53.459 "ffdhe2048", 00:18:53.459 "ffdhe3072", 00:18:53.459 "ffdhe4096", 00:18:53.459 "ffdhe6144", 00:18:53.459 "ffdhe8192" 00:18:53.459 ] 00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "nvmf_set_max_subsystems", 00:18:53.459 "params": { 00:18:53.459 "max_subsystems": 1024 
00:18:53.459 } 00:18:53.459 }, 00:18:53.459 { 00:18:53.459 "method": "nvmf_set_crdt", 00:18:53.459 "params": { 00:18:53.459 "crdt1": 0, 00:18:53.459 "crdt2": 0, 00:18:53.459 "crdt3": 0 00:18:53.459 } 00:18:53.459 }, 00:18:53.460 { 00:18:53.460 "method": "nvmf_create_transport", 00:18:53.460 "params": { 00:18:53.460 "trtype": "TCP", 00:18:53.460 "max_queue_depth": 128, 00:18:53.460 "max_io_qpairs_per_ctrlr": 127, 00:18:53.460 "in_capsule_data_size": 4096, 00:18:53.460 "max_io_size": 131072, 00:18:53.460 "io_unit_size": 131072, 00:18:53.460 "max_aq_depth": 128, 00:18:53.460 "num_shared_buffers": 511, 00:18:53.460 "buf_cache_size": 4294967295, 00:18:53.460 "dif_insert_or_strip": false, 00:18:53.460 "zcopy": false, 00:18:53.460 "c2h_success": false, 00:18:53.460 "sock_priority": 0, 00:18:53.460 "abort_timeout_sec": 1, 00:18:53.460 "ack_timeout": 0, 00:18:53.460 "data_wr_pool_size": 0 00:18:53.460 } 00:18:53.460 }, 00:18:53.460 { 00:18:53.460 "method": "nvmf_create_subsystem", 00:18:53.460 "params": { 00:18:53.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.460 "allow_any_host": false, 00:18:53.460 "serial_number": "SPDK00000000000001", 00:18:53.460 "model_number": "SPDK bdev Controller", 00:18:53.460 "max_namespaces": 10, 00:18:53.460 "min_cntlid": 1, 00:18:53.460 "max_cntlid": 65519, 00:18:53.460 "ana_reporting": false 00:18:53.460 } 00:18:53.460 }, 00:18:53.460 { 00:18:53.460 "method": "nvmf_subsystem_add_host", 00:18:53.460 "params": { 00:18:53.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.460 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.460 "psk": "key0" 00:18:53.460 } 00:18:53.460 }, 00:18:53.460 { 00:18:53.460 "method": "nvmf_subsystem_add_ns", 00:18:53.460 "params": { 00:18:53.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.460 "namespace": { 00:18:53.460 "nsid": 1, 00:18:53.460 "bdev_name": "malloc0", 00:18:53.460 "nguid": "2FA0FC973EC24309AC5CAA0BC1C6EFBD", 00:18:53.460 "uuid": "2fa0fc97-3ec2-4309-ac5c-aa0bc1c6efbd", 00:18:53.460 "no_auto_visible": 
false
00:18:53.460 }
00:18:53.460 }
00:18:53.460 },
00:18:53.460 {
00:18:53.460 "method": "nvmf_subsystem_add_listener",
00:18:53.460 "params": {
00:18:53.460 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:53.460 "listen_address": {
00:18:53.460 "trtype": "TCP",
00:18:53.460 "adrfam": "IPv4",
00:18:53.460 "traddr": "10.0.0.2",
00:18:53.460 "trsvcid": "4420"
00:18:53.460 },
00:18:53.460 "secure_channel": true
00:18:53.460 }
00:18:53.460 }
00:18:53.460 ]
00:18:53.460 }
00:18:53.460 ]
00:18:53.460 }'
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2676022
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2676022
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2676022 ']'
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:53.460 09:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:53.460 [2024-11-20 09:57:26.938092] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:18:53.460 [2024-11-20 09:57:26.938138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:53.460 [2024-11-20 09:57:27.010933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:53.719 [2024-11-20 09:57:27.052139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:53.719 [2024-11-20 09:57:27.052169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:53.719 [2024-11-20 09:57:27.052176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:53.719 [2024-11-20 09:57:27.052182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:53.719 [2024-11-20 09:57:27.052187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:53.719 [2024-11-20 09:57:27.052771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:53.719 [2024-11-20 09:57:27.264998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:53.719 [2024-11-20 09:57:27.297033] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:53.719 [2024-11-20 09:57:27.297279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:54.287 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:54.287 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:54.287 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:54.287 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2676129
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2676129 /var/tmp/bdevperf.sock
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2676129 ']'
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local
max_retries=100 00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.288 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:54.288 "subsystems": [ 00:18:54.288 { 00:18:54.288 "subsystem": "keyring", 00:18:54.288 "config": [ 00:18:54.288 { 00:18:54.288 "method": "keyring_file_add_key", 00:18:54.288 "params": { 00:18:54.288 "name": "key0", 00:18:54.288 "path": "/tmp/tmp.Q1ZPp63qoV" 00:18:54.288 } 00:18:54.288 } 00:18:54.288 ] 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "subsystem": "iobuf", 00:18:54.288 "config": [ 00:18:54.288 { 00:18:54.288 "method": "iobuf_set_options", 00:18:54.288 "params": { 00:18:54.288 "small_pool_count": 8192, 00:18:54.288 "large_pool_count": 1024, 00:18:54.288 "small_bufsize": 8192, 00:18:54.288 "large_bufsize": 135168, 00:18:54.288 "enable_numa": false 00:18:54.288 } 00:18:54.288 } 00:18:54.288 ] 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "subsystem": "sock", 00:18:54.288 "config": [ 00:18:54.288 { 00:18:54.288 "method": "sock_set_default_impl", 00:18:54.288 "params": { 00:18:54.288 "impl_name": "posix" 00:18:54.288 } 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "method": "sock_impl_set_options", 00:18:54.288 "params": { 00:18:54.288 "impl_name": "ssl", 00:18:54.288 "recv_buf_size": 4096, 00:18:54.288 "send_buf_size": 4096, 00:18:54.288 "enable_recv_pipe": true, 00:18:54.288 "enable_quickack": false, 00:18:54.288 "enable_placement_id": 0, 00:18:54.288 "enable_zerocopy_send_server": true, 00:18:54.288 "enable_zerocopy_send_client": false, 00:18:54.288 "zerocopy_threshold": 0, 00:18:54.288 "tls_version": 0, 00:18:54.288 "enable_ktls": false 00:18:54.288 } 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "method": "sock_impl_set_options", 00:18:54.288 "params": { 
00:18:54.288 "impl_name": "posix", 00:18:54.288 "recv_buf_size": 2097152, 00:18:54.288 "send_buf_size": 2097152, 00:18:54.288 "enable_recv_pipe": true, 00:18:54.288 "enable_quickack": false, 00:18:54.288 "enable_placement_id": 0, 00:18:54.288 "enable_zerocopy_send_server": true, 00:18:54.288 "enable_zerocopy_send_client": false, 00:18:54.288 "zerocopy_threshold": 0, 00:18:54.288 "tls_version": 0, 00:18:54.288 "enable_ktls": false 00:18:54.288 } 00:18:54.288 } 00:18:54.288 ] 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "subsystem": "vmd", 00:18:54.288 "config": [] 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "subsystem": "accel", 00:18:54.288 "config": [ 00:18:54.288 { 00:18:54.288 "method": "accel_set_options", 00:18:54.288 "params": { 00:18:54.288 "small_cache_size": 128, 00:18:54.288 "large_cache_size": 16, 00:18:54.288 "task_count": 2048, 00:18:54.288 "sequence_count": 2048, 00:18:54.288 "buf_count": 2048 00:18:54.288 } 00:18:54.288 } 00:18:54.288 ] 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "subsystem": "bdev", 00:18:54.288 "config": [ 00:18:54.288 { 00:18:54.288 "method": "bdev_set_options", 00:18:54.288 "params": { 00:18:54.288 "bdev_io_pool_size": 65535, 00:18:54.288 "bdev_io_cache_size": 256, 00:18:54.288 "bdev_auto_examine": true, 00:18:54.288 "iobuf_small_cache_size": 128, 00:18:54.288 "iobuf_large_cache_size": 16 00:18:54.288 } 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "method": "bdev_raid_set_options", 00:18:54.288 "params": { 00:18:54.288 "process_window_size_kb": 1024, 00:18:54.288 "process_max_bandwidth_mb_sec": 0 00:18:54.288 } 00:18:54.288 }, 00:18:54.288 { 00:18:54.288 "method": "bdev_iscsi_set_options", 00:18:54.288 "params": { 00:18:54.288 "timeout_sec": 30 00:18:54.288 } 00:18:54.288 }, 00:18:54.288 { 00:18:54.289 "method": "bdev_nvme_set_options", 00:18:54.289 "params": { 00:18:54.289 "action_on_timeout": "none", 00:18:54.289 "timeout_us": 0, 00:18:54.289 "timeout_admin_us": 0, 00:18:54.289 "keep_alive_timeout_ms": 10000, 00:18:54.289 
"arbitration_burst": 0, 00:18:54.289 "low_priority_weight": 0, 00:18:54.289 "medium_priority_weight": 0, 00:18:54.289 "high_priority_weight": 0, 00:18:54.289 "nvme_adminq_poll_period_us": 10000, 00:18:54.289 "nvme_ioq_poll_period_us": 0, 00:18:54.289 "io_queue_requests": 512, 00:18:54.289 "delay_cmd_submit": true, 00:18:54.289 "transport_retry_count": 4, 00:18:54.289 "bdev_retry_count": 3, 00:18:54.289 "transport_ack_timeout": 0, 00:18:54.289 "ctrlr_loss_timeout_sec": 0, 00:18:54.289 "reconnect_delay_sec": 0, 00:18:54.289 "fast_io_fail_timeout_sec": 0, 00:18:54.289 "disable_auto_failback": false, 00:18:54.289 "generate_uuids": false, 00:18:54.289 "transport_tos": 0, 00:18:54.289 "nvme_error_stat": false, 00:18:54.289 "rdma_srq_size": 0, 00:18:54.289 "io_path_stat": false, 00:18:54.289 "allow_accel_sequence": false, 00:18:54.289 "rdma_max_cq_size": 0, 00:18:54.289 "rdma_cm_event_timeout_ms": 0, 00:18:54.289 "dhchap_digests": [ 00:18:54.289 "sha256", 00:18:54.289 "sha384", 00:18:54.289 "sha512" 00:18:54.289 ], 00:18:54.289 "dhchap_dhgroups": [ 00:18:54.289 "null", 00:18:54.289 "ffdhe2048", 00:18:54.289 "ffdhe3072", 00:18:54.289 "ffdhe4096", 00:18:54.289 "ffdhe6144", 00:18:54.289 "ffdhe8192" 00:18:54.289 ] 00:18:54.289 } 00:18:54.289 }, 00:18:54.289 { 00:18:54.289 "method": "bdev_nvme_attach_controller", 00:18:54.289 "params": { 00:18:54.289 "name": "TLSTEST", 00:18:54.289 "trtype": "TCP", 00:18:54.289 "adrfam": "IPv4", 00:18:54.289 "traddr": "10.0.0.2", 00:18:54.289 "trsvcid": "4420", 00:18:54.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.289 "prchk_reftag": false, 00:18:54.289 "prchk_guard": false, 00:18:54.289 "ctrlr_loss_timeout_sec": 0, 00:18:54.289 "reconnect_delay_sec": 0, 00:18:54.289 "fast_io_fail_timeout_sec": 0, 00:18:54.289 "psk": "key0", 00:18:54.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.289 "hdgst": false, 00:18:54.289 "ddgst": false, 00:18:54.289 "multipath": "multipath" 00:18:54.289 } 00:18:54.289 }, 00:18:54.289 { 00:18:54.289 
"method": "bdev_nvme_set_hotplug", 00:18:54.289 "params": { 00:18:54.289 "period_us": 100000, 00:18:54.289 "enable": false 00:18:54.289 } 00:18:54.289 }, 00:18:54.289 { 00:18:54.289 "method": "bdev_wait_for_examine" 00:18:54.289 } 00:18:54.289 ] 00:18:54.289 }, 00:18:54.289 { 00:18:54.289 "subsystem": "nbd", 00:18:54.289 "config": [] 00:18:54.289 } 00:18:54.289 ] 00:18:54.289 }' 00:18:54.289 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.289 09:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.289 [2024-11-20 09:57:27.846477] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:18:54.289 [2024-11-20 09:57:27.846536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676129 ] 00:18:54.547 [2024-11-20 09:57:27.920976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.547 [2024-11-20 09:57:27.962572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.547 [2024-11-20 09:57:28.113035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.112 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.112 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.112 09:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.370 Running I/O for 10 seconds... 
00:18:57.238 5486.00 IOPS, 21.43 MiB/s [2024-11-20T08:57:32.195Z] 5519.50 IOPS, 21.56 MiB/s [2024-11-20T08:57:33.129Z] 5569.33 IOPS, 21.76 MiB/s [2024-11-20T08:57:34.063Z] 5598.25 IOPS, 21.87 MiB/s [2024-11-20T08:57:34.998Z] 5585.00 IOPS, 21.82 MiB/s [2024-11-20T08:57:35.933Z] 5551.50 IOPS, 21.69 MiB/s [2024-11-20T08:57:36.866Z] 5573.71 IOPS, 21.77 MiB/s [2024-11-20T08:57:37.798Z] 5581.75 IOPS, 21.80 MiB/s [2024-11-20T08:57:39.171Z] 5590.33 IOPS, 21.84 MiB/s [2024-11-20T08:57:39.171Z] 5589.40 IOPS, 21.83 MiB/s 00:19:05.589 Latency(us) 00:19:05.589 [2024-11-20T08:57:39.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.589 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.589 Verification LBA range: start 0x0 length 0x2000 00:19:05.589 TLSTESTn1 : 10.02 5592.96 21.85 0.00 0.00 22850.72 4899.60 22344.66 00:19:05.589 [2024-11-20T08:57:39.171Z] =================================================================================================================== 00:19:05.589 [2024-11-20T08:57:39.171Z] Total : 5592.96 21.85 0.00 0.00 22850.72 4899.60 22344.66 00:19:05.589 { 00:19:05.589 "results": [ 00:19:05.589 { 00:19:05.589 "job": "TLSTESTn1", 00:19:05.589 "core_mask": "0x4", 00:19:05.589 "workload": "verify", 00:19:05.589 "status": "finished", 00:19:05.589 "verify_range": { 00:19:05.589 "start": 0, 00:19:05.589 "length": 8192 00:19:05.589 }, 00:19:05.589 "queue_depth": 128, 00:19:05.589 "io_size": 4096, 00:19:05.589 "runtime": 10.016345, 00:19:05.589 "iops": 5592.958309642889, 00:19:05.589 "mibps": 21.847493397042534, 00:19:05.589 "io_failed": 0, 00:19:05.589 "io_timeout": 0, 00:19:05.589 "avg_latency_us": 22850.72409628702, 00:19:05.589 "min_latency_us": 4899.596190476191, 00:19:05.589 "max_latency_us": 22344.655238095238 00:19:05.589 } 00:19:05.589 ], 00:19:05.589 "core_count": 1 00:19:05.589 } 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
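As a sanity check on the summary row above, the derived fields in bdevperf's results JSON are internally consistent: with a 4096-byte I/O size, MiB/s is just IOPS divided by 256. A small script (raw values copied from the `"results"` block in the log) recomputes them:

```python
# Recompute bdevperf's derived throughput fields from its raw counters.
runtime_s = 10.016345        # "runtime" from the results block
iops = 5592.958309642889     # "iops"
io_size = 4096               # "io_size" in bytes

total_ios = iops * runtime_s             # total completed I/Os over the run
mibps = iops * io_size / (1024 ** 2)     # throughput in MiB/s

print(round(mibps, 2))       # 21.85, matching the reported 21.85 MiB/s
```

The same arithmetic applies to the one-second `nvme0n1` run later in the log.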
exit 1' SIGINT SIGTERM EXIT 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2676129 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2676129 ']' 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2676129 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676129 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676129' 00:19:05.589 killing process with pid 2676129 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2676129 00:19:05.589 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.589 00:19:05.589 Latency(us) 00:19:05.589 [2024-11-20T08:57:39.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.589 [2024-11-20T08:57:39.171Z] =================================================================================================================== 00:19:05.589 [2024-11-20T08:57:39.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.589 09:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2676129 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2676022 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2676022 ']' 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2676022 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676022 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.589 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676022' 00:19:05.589 killing process with pid 2676022 00:19:05.590 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2676022 00:19:05.590 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2676022 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2678021 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2678021 00:19:05.849 
09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678021 ']' 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.849 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 [2024-11-20 09:57:39.301606] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:05.849 [2024-11-20 09:57:39.301651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.849 [2024-11-20 09:57:39.380610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.849 [2024-11-20 09:57:39.421444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.849 [2024-11-20 09:57:39.421481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.849 [2024-11-20 09:57:39.421488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.849 [2024-11-20 09:57:39.421494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:05.849 [2024-11-20 09:57:39.421499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.849 [2024-11-20 09:57:39.422058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Q1ZPp63qoV 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q1ZPp63qoV 00:19:06.107 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.366 [2024-11-20 09:57:39.717128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.366 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.624 09:57:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:06.624 [2024-11-20 09:57:40.114165] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:06.624 [2024-11-20 09:57:40.114384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.625 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.883 malloc0 00:19:06.883 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.143 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2678368 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2678368 /var/tmp/bdevperf.sock 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678368 ']' 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.402 
09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.402 09:57:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.661 [2024-11-20 09:57:40.990268] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:07.662 [2024-11-20 09:57:40.990321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678368 ] 00:19:07.662 [2024-11-20 09:57:41.066840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.662 [2024-11-20 09:57:41.107077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.662 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.662 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.662 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:19:07.920 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:08.179 [2024-11-20 09:57:41.566061] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:08.179 nvme0n1 00:19:08.179 09:57:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.179 Running I/O for 1 seconds... 00:19:09.556 5216.00 IOPS, 20.38 MiB/s 00:19:09.556 Latency(us) 00:19:09.556 [2024-11-20T08:57:43.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.556 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:09.556 Verification LBA range: start 0x0 length 0x2000 00:19:09.556 nvme0n1 : 1.02 5258.24 20.54 0.00 0.00 24154.31 6459.98 31956.60 00:19:09.556 [2024-11-20T08:57:43.138Z] =================================================================================================================== 00:19:09.556 [2024-11-20T08:57:43.138Z] Total : 5258.24 20.54 0.00 0.00 24154.31 6459.98 31956.60 00:19:09.556 { 00:19:09.556 "results": [ 00:19:09.556 { 00:19:09.556 "job": "nvme0n1", 00:19:09.556 "core_mask": "0x2", 00:19:09.556 "workload": "verify", 00:19:09.556 "status": "finished", 00:19:09.556 "verify_range": { 00:19:09.556 "start": 0, 00:19:09.556 "length": 8192 00:19:09.556 }, 00:19:09.556 "queue_depth": 128, 00:19:09.556 "io_size": 4096, 00:19:09.556 "runtime": 1.016309, 00:19:09.556 "iops": 5258.243309859501, 00:19:09.556 "mibps": 20.540012929138676, 00:19:09.556 "io_failed": 0, 00:19:09.556 "io_timeout": 0, 00:19:09.556 "avg_latency_us": 24154.311719418307, 00:19:09.556 "min_latency_us": 6459.977142857143, 00:19:09.556 "max_latency_us": 31956.601904761905 00:19:09.556 } 00:19:09.556 ], 00:19:09.556 "core_count": 1 00:19:09.556 } 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2678368 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678368 ']' 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2678368 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678368 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678368' 00:19:09.556 killing process with pid 2678368 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678368 00:19:09.556 Received shutdown signal, test time was about 1.000000 seconds 00:19:09.556 00:19:09.556 Latency(us) 00:19:09.556 [2024-11-20T08:57:43.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.556 [2024-11-20T08:57:43.138Z] =================================================================================================================== 00:19:09.556 [2024-11-20T08:57:43.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678368 00:19:09.556 09:57:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2678021 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678021 ']' 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678021 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678021 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678021' 00:19:09.556 killing process with pid 2678021 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678021 00:19:09.556 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678021 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2678627 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2678627 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678627 ']' 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.816 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.816 [2024-11-20 09:57:43.274692] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:09.816 [2024-11-20 09:57:43.274744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.816 [2024-11-20 09:57:43.355060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.816 [2024-11-20 09:57:43.393538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.816 [2024-11-20 09:57:43.393573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.816 [2024-11-20 09:57:43.393580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.816 [2024-11-20 09:57:43.393586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.816 [2024-11-20 09:57:43.393591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:09.816 [2024-11-20 09:57:43.394153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.075 [2024-11-20 09:57:43.540974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.075 malloc0 00:19:10.075 [2024-11-20 09:57:43.569170] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.075 [2024-11-20 09:57:43.569402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2678809 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2678809 /var/tmp/bdevperf.sock 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2678809 ']' 00:19:10.075 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.076 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.076 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.076 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.076 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.076 [2024-11-20 09:57:43.645428] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:19:10.076 [2024-11-20 09:57:43.645468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678809 ] 00:19:10.336 [2024-11-20 09:57:43.718733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.336 [2024-11-20 09:57:43.758942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.336 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.336 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.336 09:57:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q1ZPp63qoV 00:19:10.595 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:10.855 [2024-11-20 09:57:44.226142] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.855 nvme0n1 00:19:10.855 09:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.855 Running I/O for 1 seconds... 
00:19:12.233 5469.00 IOPS, 21.36 MiB/s 00:19:12.233 Latency(us) 00:19:12.233 [2024-11-20T08:57:45.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:12.233 Verification LBA range: start 0x0 length 0x2000 00:19:12.233 nvme0n1 : 1.02 5507.66 21.51 0.00 0.00 23075.47 4868.39 42192.70 00:19:12.233 [2024-11-20T08:57:45.815Z] =================================================================================================================== 00:19:12.233 [2024-11-20T08:57:45.815Z] Total : 5507.66 21.51 0.00 0.00 23075.47 4868.39 42192.70 00:19:12.233 { 00:19:12.233 "results": [ 00:19:12.233 { 00:19:12.233 "job": "nvme0n1", 00:19:12.233 "core_mask": "0x2", 00:19:12.233 "workload": "verify", 00:19:12.233 "status": "finished", 00:19:12.233 "verify_range": { 00:19:12.233 "start": 0, 00:19:12.233 "length": 8192 00:19:12.233 }, 00:19:12.233 "queue_depth": 128, 00:19:12.233 "io_size": 4096, 00:19:12.233 "runtime": 1.016402, 00:19:12.233 "iops": 5507.6633064476455, 00:19:12.233 "mibps": 21.514309790811115, 00:19:12.233 "io_failed": 0, 00:19:12.233 "io_timeout": 0, 00:19:12.233 "avg_latency_us": 23075.469886864357, 00:19:12.233 "min_latency_us": 4868.388571428572, 00:19:12.233 "max_latency_us": 42192.700952380954 00:19:12.233 } 00:19:12.233 ], 00:19:12.233 "core_count": 1 00:19:12.233 } 00:19:12.233 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:12.233 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.233 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.233 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.233 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:12.233 "subsystems": [ 00:19:12.233 { 00:19:12.233 "subsystem": 
"keyring", 00:19:12.233 "config": [ 00:19:12.233 { 00:19:12.233 "method": "keyring_file_add_key", 00:19:12.233 "params": { 00:19:12.233 "name": "key0", 00:19:12.233 "path": "/tmp/tmp.Q1ZPp63qoV" 00:19:12.233 } 00:19:12.233 } 00:19:12.233 ] 00:19:12.233 }, 00:19:12.233 { 00:19:12.233 "subsystem": "iobuf", 00:19:12.233 "config": [ 00:19:12.233 { 00:19:12.233 "method": "iobuf_set_options", 00:19:12.233 "params": { 00:19:12.233 "small_pool_count": 8192, 00:19:12.233 "large_pool_count": 1024, 00:19:12.233 "small_bufsize": 8192, 00:19:12.233 "large_bufsize": 135168, 00:19:12.233 "enable_numa": false 00:19:12.233 } 00:19:12.233 } 00:19:12.233 ] 00:19:12.233 }, 00:19:12.233 { 00:19:12.233 "subsystem": "sock", 00:19:12.233 "config": [ 00:19:12.233 { 00:19:12.233 "method": "sock_set_default_impl", 00:19:12.233 "params": { 00:19:12.233 "impl_name": "posix" 00:19:12.233 } 00:19:12.233 }, 00:19:12.233 { 00:19:12.233 "method": "sock_impl_set_options", 00:19:12.233 "params": { 00:19:12.233 "impl_name": "ssl", 00:19:12.233 "recv_buf_size": 4096, 00:19:12.233 "send_buf_size": 4096, 00:19:12.233 "enable_recv_pipe": true, 00:19:12.233 "enable_quickack": false, 00:19:12.233 "enable_placement_id": 0, 00:19:12.233 "enable_zerocopy_send_server": true, 00:19:12.233 "enable_zerocopy_send_client": false, 00:19:12.233 "zerocopy_threshold": 0, 00:19:12.233 "tls_version": 0, 00:19:12.233 "enable_ktls": false 00:19:12.233 } 00:19:12.233 }, 00:19:12.233 { 00:19:12.233 "method": "sock_impl_set_options", 00:19:12.233 "params": { 00:19:12.233 "impl_name": "posix", 00:19:12.233 "recv_buf_size": 2097152, 00:19:12.233 "send_buf_size": 2097152, 00:19:12.233 "enable_recv_pipe": true, 00:19:12.233 "enable_quickack": false, 00:19:12.233 "enable_placement_id": 0, 00:19:12.233 "enable_zerocopy_send_server": true, 00:19:12.233 "enable_zerocopy_send_client": false, 00:19:12.234 "zerocopy_threshold": 0, 00:19:12.234 "tls_version": 0, 00:19:12.234 "enable_ktls": false 00:19:12.234 } 00:19:12.234 } 00:19:12.234 
] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "vmd", 00:19:12.234 "config": [] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "accel", 00:19:12.234 "config": [ 00:19:12.234 { 00:19:12.234 "method": "accel_set_options", 00:19:12.234 "params": { 00:19:12.234 "small_cache_size": 128, 00:19:12.234 "large_cache_size": 16, 00:19:12.234 "task_count": 2048, 00:19:12.234 "sequence_count": 2048, 00:19:12.234 "buf_count": 2048 00:19:12.234 } 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "bdev", 00:19:12.234 "config": [ 00:19:12.234 { 00:19:12.234 "method": "bdev_set_options", 00:19:12.234 "params": { 00:19:12.234 "bdev_io_pool_size": 65535, 00:19:12.234 "bdev_io_cache_size": 256, 00:19:12.234 "bdev_auto_examine": true, 00:19:12.234 "iobuf_small_cache_size": 128, 00:19:12.234 "iobuf_large_cache_size": 16 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_raid_set_options", 00:19:12.234 "params": { 00:19:12.234 "process_window_size_kb": 1024, 00:19:12.234 "process_max_bandwidth_mb_sec": 0 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_iscsi_set_options", 00:19:12.234 "params": { 00:19:12.234 "timeout_sec": 30 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_nvme_set_options", 00:19:12.234 "params": { 00:19:12.234 "action_on_timeout": "none", 00:19:12.234 "timeout_us": 0, 00:19:12.234 "timeout_admin_us": 0, 00:19:12.234 "keep_alive_timeout_ms": 10000, 00:19:12.234 "arbitration_burst": 0, 00:19:12.234 "low_priority_weight": 0, 00:19:12.234 "medium_priority_weight": 0, 00:19:12.234 "high_priority_weight": 0, 00:19:12.234 "nvme_adminq_poll_period_us": 10000, 00:19:12.234 "nvme_ioq_poll_period_us": 0, 00:19:12.234 "io_queue_requests": 0, 00:19:12.234 "delay_cmd_submit": true, 00:19:12.234 "transport_retry_count": 4, 00:19:12.234 "bdev_retry_count": 3, 00:19:12.234 "transport_ack_timeout": 0, 00:19:12.234 "ctrlr_loss_timeout_sec": 0, 
00:19:12.234 "reconnect_delay_sec": 0, 00:19:12.234 "fast_io_fail_timeout_sec": 0, 00:19:12.234 "disable_auto_failback": false, 00:19:12.234 "generate_uuids": false, 00:19:12.234 "transport_tos": 0, 00:19:12.234 "nvme_error_stat": false, 00:19:12.234 "rdma_srq_size": 0, 00:19:12.234 "io_path_stat": false, 00:19:12.234 "allow_accel_sequence": false, 00:19:12.234 "rdma_max_cq_size": 0, 00:19:12.234 "rdma_cm_event_timeout_ms": 0, 00:19:12.234 "dhchap_digests": [ 00:19:12.234 "sha256", 00:19:12.234 "sha384", 00:19:12.234 "sha512" 00:19:12.234 ], 00:19:12.234 "dhchap_dhgroups": [ 00:19:12.234 "null", 00:19:12.234 "ffdhe2048", 00:19:12.234 "ffdhe3072", 00:19:12.234 "ffdhe4096", 00:19:12.234 "ffdhe6144", 00:19:12.234 "ffdhe8192" 00:19:12.234 ] 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_nvme_set_hotplug", 00:19:12.234 "params": { 00:19:12.234 "period_us": 100000, 00:19:12.234 "enable": false 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_malloc_create", 00:19:12.234 "params": { 00:19:12.234 "name": "malloc0", 00:19:12.234 "num_blocks": 8192, 00:19:12.234 "block_size": 4096, 00:19:12.234 "physical_block_size": 4096, 00:19:12.234 "uuid": "1c66fb49-1d1f-44dc-9d8e-6b37f38961d7", 00:19:12.234 "optimal_io_boundary": 0, 00:19:12.234 "md_size": 0, 00:19:12.234 "dif_type": 0, 00:19:12.234 "dif_is_head_of_md": false, 00:19:12.234 "dif_pi_format": 0 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "bdev_wait_for_examine" 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "nbd", 00:19:12.234 "config": [] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "scheduler", 00:19:12.234 "config": [ 00:19:12.234 { 00:19:12.234 "method": "framework_set_scheduler", 00:19:12.234 "params": { 00:19:12.234 "name": "static" 00:19:12.234 } 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "nvmf", 00:19:12.234 "config": [ 00:19:12.234 { 
00:19:12.234 "method": "nvmf_set_config", 00:19:12.234 "params": { 00:19:12.234 "discovery_filter": "match_any", 00:19:12.234 "admin_cmd_passthru": { 00:19:12.234 "identify_ctrlr": false 00:19:12.234 }, 00:19:12.234 "dhchap_digests": [ 00:19:12.234 "sha256", 00:19:12.234 "sha384", 00:19:12.234 "sha512" 00:19:12.234 ], 00:19:12.234 "dhchap_dhgroups": [ 00:19:12.234 "null", 00:19:12.234 "ffdhe2048", 00:19:12.234 "ffdhe3072", 00:19:12.234 "ffdhe4096", 00:19:12.234 "ffdhe6144", 00:19:12.234 "ffdhe8192" 00:19:12.234 ] 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_set_max_subsystems", 00:19:12.234 "params": { 00:19:12.234 "max_subsystems": 1024 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_set_crdt", 00:19:12.234 "params": { 00:19:12.234 "crdt1": 0, 00:19:12.234 "crdt2": 0, 00:19:12.234 "crdt3": 0 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_create_transport", 00:19:12.234 "params": { 00:19:12.234 "trtype": "TCP", 00:19:12.234 "max_queue_depth": 128, 00:19:12.234 "max_io_qpairs_per_ctrlr": 127, 00:19:12.234 "in_capsule_data_size": 4096, 00:19:12.234 "max_io_size": 131072, 00:19:12.234 "io_unit_size": 131072, 00:19:12.234 "max_aq_depth": 128, 00:19:12.234 "num_shared_buffers": 511, 00:19:12.234 "buf_cache_size": 4294967295, 00:19:12.234 "dif_insert_or_strip": false, 00:19:12.234 "zcopy": false, 00:19:12.234 "c2h_success": false, 00:19:12.234 "sock_priority": 0, 00:19:12.234 "abort_timeout_sec": 1, 00:19:12.234 "ack_timeout": 0, 00:19:12.234 "data_wr_pool_size": 0 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_create_subsystem", 00:19:12.234 "params": { 00:19:12.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.234 "allow_any_host": false, 00:19:12.234 "serial_number": "00000000000000000000", 00:19:12.234 "model_number": "SPDK bdev Controller", 00:19:12.234 "max_namespaces": 32, 00:19:12.234 "min_cntlid": 1, 00:19:12.234 "max_cntlid": 65519, 00:19:12.234 
"ana_reporting": false 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_subsystem_add_host", 00:19:12.234 "params": { 00:19:12.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.234 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.234 "psk": "key0" 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_subsystem_add_ns", 00:19:12.234 "params": { 00:19:12.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.234 "namespace": { 00:19:12.234 "nsid": 1, 00:19:12.234 "bdev_name": "malloc0", 00:19:12.234 "nguid": "1C66FB491D1F44DC9D8E6B37F38961D7", 00:19:12.234 "uuid": "1c66fb49-1d1f-44dc-9d8e-6b37f38961d7", 00:19:12.234 "no_auto_visible": false 00:19:12.234 } 00:19:12.234 } 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "method": "nvmf_subsystem_add_listener", 00:19:12.234 "params": { 00:19:12.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.234 "listen_address": { 00:19:12.234 "trtype": "TCP", 00:19:12.234 "adrfam": "IPv4", 00:19:12.234 "traddr": "10.0.0.2", 00:19:12.234 "trsvcid": "4420" 00:19:12.234 }, 00:19:12.234 "secure_channel": false, 00:19:12.234 "sock_impl": "ssl" 00:19:12.234 } 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }' 00:19:12.234 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:12.234 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:12.234 "subsystems": [ 00:19:12.234 { 00:19:12.234 "subsystem": "keyring", 00:19:12.234 "config": [ 00:19:12.234 { 00:19:12.234 "method": "keyring_file_add_key", 00:19:12.234 "params": { 00:19:12.234 "name": "key0", 00:19:12.234 "path": "/tmp/tmp.Q1ZPp63qoV" 00:19:12.234 } 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "iobuf", 00:19:12.234 "config": [ 00:19:12.234 { 00:19:12.234 "method": "iobuf_set_options", 00:19:12.234 "params": { 00:19:12.234 
"small_pool_count": 8192, 00:19:12.234 "large_pool_count": 1024, 00:19:12.234 "small_bufsize": 8192, 00:19:12.234 "large_bufsize": 135168, 00:19:12.234 "enable_numa": false 00:19:12.234 } 00:19:12.234 } 00:19:12.234 ] 00:19:12.234 }, 00:19:12.234 { 00:19:12.234 "subsystem": "sock", 00:19:12.235 "config": [ 00:19:12.235 { 00:19:12.235 "method": "sock_set_default_impl", 00:19:12.235 "params": { 00:19:12.235 "impl_name": "posix" 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "sock_impl_set_options", 00:19:12.235 "params": { 00:19:12.235 "impl_name": "ssl", 00:19:12.235 "recv_buf_size": 4096, 00:19:12.235 "send_buf_size": 4096, 00:19:12.235 "enable_recv_pipe": true, 00:19:12.235 "enable_quickack": false, 00:19:12.235 "enable_placement_id": 0, 00:19:12.235 "enable_zerocopy_send_server": true, 00:19:12.235 "enable_zerocopy_send_client": false, 00:19:12.235 "zerocopy_threshold": 0, 00:19:12.235 "tls_version": 0, 00:19:12.235 "enable_ktls": false 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "sock_impl_set_options", 00:19:12.235 "params": { 00:19:12.235 "impl_name": "posix", 00:19:12.235 "recv_buf_size": 2097152, 00:19:12.235 "send_buf_size": 2097152, 00:19:12.235 "enable_recv_pipe": true, 00:19:12.235 "enable_quickack": false, 00:19:12.235 "enable_placement_id": 0, 00:19:12.235 "enable_zerocopy_send_server": true, 00:19:12.235 "enable_zerocopy_send_client": false, 00:19:12.235 "zerocopy_threshold": 0, 00:19:12.235 "tls_version": 0, 00:19:12.235 "enable_ktls": false 00:19:12.235 } 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "vmd", 00:19:12.235 "config": [] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "accel", 00:19:12.235 "config": [ 00:19:12.235 { 00:19:12.235 "method": "accel_set_options", 00:19:12.235 "params": { 00:19:12.235 "small_cache_size": 128, 00:19:12.235 "large_cache_size": 16, 00:19:12.235 "task_count": 2048, 00:19:12.235 "sequence_count": 2048, 00:19:12.235 
"buf_count": 2048 00:19:12.235 } 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "bdev", 00:19:12.235 "config": [ 00:19:12.235 { 00:19:12.235 "method": "bdev_set_options", 00:19:12.235 "params": { 00:19:12.235 "bdev_io_pool_size": 65535, 00:19:12.235 "bdev_io_cache_size": 256, 00:19:12.235 "bdev_auto_examine": true, 00:19:12.235 "iobuf_small_cache_size": 128, 00:19:12.235 "iobuf_large_cache_size": 16 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_raid_set_options", 00:19:12.235 "params": { 00:19:12.235 "process_window_size_kb": 1024, 00:19:12.235 "process_max_bandwidth_mb_sec": 0 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_iscsi_set_options", 00:19:12.235 "params": { 00:19:12.235 "timeout_sec": 30 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_nvme_set_options", 00:19:12.235 "params": { 00:19:12.235 "action_on_timeout": "none", 00:19:12.235 "timeout_us": 0, 00:19:12.235 "timeout_admin_us": 0, 00:19:12.235 "keep_alive_timeout_ms": 10000, 00:19:12.235 "arbitration_burst": 0, 00:19:12.235 "low_priority_weight": 0, 00:19:12.235 "medium_priority_weight": 0, 00:19:12.235 "high_priority_weight": 0, 00:19:12.235 "nvme_adminq_poll_period_us": 10000, 00:19:12.235 "nvme_ioq_poll_period_us": 0, 00:19:12.235 "io_queue_requests": 512, 00:19:12.235 "delay_cmd_submit": true, 00:19:12.235 "transport_retry_count": 4, 00:19:12.235 "bdev_retry_count": 3, 00:19:12.235 "transport_ack_timeout": 0, 00:19:12.235 "ctrlr_loss_timeout_sec": 0, 00:19:12.235 "reconnect_delay_sec": 0, 00:19:12.235 "fast_io_fail_timeout_sec": 0, 00:19:12.235 "disable_auto_failback": false, 00:19:12.235 "generate_uuids": false, 00:19:12.235 "transport_tos": 0, 00:19:12.235 "nvme_error_stat": false, 00:19:12.235 "rdma_srq_size": 0, 00:19:12.235 "io_path_stat": false, 00:19:12.235 "allow_accel_sequence": false, 00:19:12.235 "rdma_max_cq_size": 0, 00:19:12.235 "rdma_cm_event_timeout_ms": 0, 
00:19:12.235 "dhchap_digests": [ 00:19:12.235 "sha256", 00:19:12.235 "sha384", 00:19:12.235 "sha512" 00:19:12.235 ], 00:19:12.235 "dhchap_dhgroups": [ 00:19:12.235 "null", 00:19:12.235 "ffdhe2048", 00:19:12.235 "ffdhe3072", 00:19:12.235 "ffdhe4096", 00:19:12.235 "ffdhe6144", 00:19:12.235 "ffdhe8192" 00:19:12.235 ] 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_nvme_attach_controller", 00:19:12.235 "params": { 00:19:12.235 "name": "nvme0", 00:19:12.235 "trtype": "TCP", 00:19:12.235 "adrfam": "IPv4", 00:19:12.235 "traddr": "10.0.0.2", 00:19:12.235 "trsvcid": "4420", 00:19:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.235 "prchk_reftag": false, 00:19:12.235 "prchk_guard": false, 00:19:12.235 "ctrlr_loss_timeout_sec": 0, 00:19:12.235 "reconnect_delay_sec": 0, 00:19:12.235 "fast_io_fail_timeout_sec": 0, 00:19:12.235 "psk": "key0", 00:19:12.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.235 "hdgst": false, 00:19:12.235 "ddgst": false, 00:19:12.235 "multipath": "multipath" 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_nvme_set_hotplug", 00:19:12.235 "params": { 00:19:12.235 "period_us": 100000, 00:19:12.235 "enable": false 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_enable_histogram", 00:19:12.235 "params": { 00:19:12.235 "name": "nvme0n1", 00:19:12.235 "enable": true 00:19:12.235 } 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "method": "bdev_wait_for_examine" 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }, 00:19:12.235 { 00:19:12.235 "subsystem": "nbd", 00:19:12.235 "config": [] 00:19:12.235 } 00:19:12.235 ] 00:19:12.235 }' 00:19:12.235 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2678809 00:19:12.235 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678809 ']' 00:19:12.235 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678809 00:19:12.495 09:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678809 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678809' 00:19:12.495 killing process with pid 2678809 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678809 00:19:12.495 Received shutdown signal, test time was about 1.000000 seconds 00:19:12.495 00:19:12.495 Latency(us) 00:19:12.495 [2024-11-20T08:57:46.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.495 [2024-11-20T08:57:46.077Z] =================================================================================================================== 00:19:12.495 [2024-11-20T08:57:46.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.495 09:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678809 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2678627 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2678627 ']' 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2678627 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.495 
09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2678627 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2678627' 00:19:12.495 killing process with pid 2678627 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2678627 00:19:12.495 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2678627 00:19:12.755 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:12.755 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.755 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.755 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:12.755 "subsystems": [ 00:19:12.755 { 00:19:12.755 "subsystem": "keyring", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "keyring_file_add_key", 00:19:12.755 "params": { 00:19:12.755 "name": "key0", 00:19:12.755 "path": "/tmp/tmp.Q1ZPp63qoV" 00:19:12.755 } 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "iobuf", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "iobuf_set_options", 00:19:12.755 "params": { 00:19:12.755 "small_pool_count": 8192, 00:19:12.755 "large_pool_count": 1024, 00:19:12.755 "small_bufsize": 8192, 00:19:12.755 "large_bufsize": 135168, 00:19:12.755 "enable_numa": false 00:19:12.755 } 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "sock", 00:19:12.755 "config": [ 
00:19:12.755 { 00:19:12.755 "method": "sock_set_default_impl", 00:19:12.755 "params": { 00:19:12.755 "impl_name": "posix" 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "sock_impl_set_options", 00:19:12.755 "params": { 00:19:12.755 "impl_name": "ssl", 00:19:12.755 "recv_buf_size": 4096, 00:19:12.755 "send_buf_size": 4096, 00:19:12.755 "enable_recv_pipe": true, 00:19:12.755 "enable_quickack": false, 00:19:12.755 "enable_placement_id": 0, 00:19:12.755 "enable_zerocopy_send_server": true, 00:19:12.755 "enable_zerocopy_send_client": false, 00:19:12.755 "zerocopy_threshold": 0, 00:19:12.755 "tls_version": 0, 00:19:12.755 "enable_ktls": false 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "sock_impl_set_options", 00:19:12.755 "params": { 00:19:12.755 "impl_name": "posix", 00:19:12.755 "recv_buf_size": 2097152, 00:19:12.755 "send_buf_size": 2097152, 00:19:12.755 "enable_recv_pipe": true, 00:19:12.755 "enable_quickack": false, 00:19:12.755 "enable_placement_id": 0, 00:19:12.755 "enable_zerocopy_send_server": true, 00:19:12.755 "enable_zerocopy_send_client": false, 00:19:12.755 "zerocopy_threshold": 0, 00:19:12.755 "tls_version": 0, 00:19:12.755 "enable_ktls": false 00:19:12.755 } 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "vmd", 00:19:12.755 "config": [] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "accel", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "accel_set_options", 00:19:12.755 "params": { 00:19:12.755 "small_cache_size": 128, 00:19:12.755 "large_cache_size": 16, 00:19:12.755 "task_count": 2048, 00:19:12.755 "sequence_count": 2048, 00:19:12.755 "buf_count": 2048 00:19:12.755 } 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "bdev", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "bdev_set_options", 00:19:12.755 "params": { 00:19:12.755 "bdev_io_pool_size": 65535, 00:19:12.755 "bdev_io_cache_size": 
256, 00:19:12.755 "bdev_auto_examine": true, 00:19:12.755 "iobuf_small_cache_size": 128, 00:19:12.755 "iobuf_large_cache_size": 16 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_raid_set_options", 00:19:12.755 "params": { 00:19:12.755 "process_window_size_kb": 1024, 00:19:12.755 "process_max_bandwidth_mb_sec": 0 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_iscsi_set_options", 00:19:12.755 "params": { 00:19:12.755 "timeout_sec": 30 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_nvme_set_options", 00:19:12.755 "params": { 00:19:12.755 "action_on_timeout": "none", 00:19:12.755 "timeout_us": 0, 00:19:12.755 "timeout_admin_us": 0, 00:19:12.755 "keep_alive_timeout_ms": 10000, 00:19:12.755 "arbitration_burst": 0, 00:19:12.755 "low_priority_weight": 0, 00:19:12.755 "medium_priority_weight": 0, 00:19:12.755 "high_priority_weight": 0, 00:19:12.755 "nvme_adminq_poll_period_us": 10000, 00:19:12.755 "nvme_ioq_poll_period_us": 0, 00:19:12.755 "io_queue_requests": 0, 00:19:12.755 "delay_cmd_submit": true, 00:19:12.755 "transport_retry_count": 4, 00:19:12.755 "bdev_retry_count": 3, 00:19:12.755 "transport_ack_timeout": 0, 00:19:12.755 "ctrlr_loss_timeout_sec": 0, 00:19:12.755 "reconnect_delay_sec": 0, 00:19:12.755 "fast_io_fail_timeout_sec": 0, 00:19:12.755 "disable_auto_failback": false, 00:19:12.755 "generate_uuids": false, 00:19:12.755 "transport_tos": 0, 00:19:12.755 "nvme_error_stat": false, 00:19:12.755 "rdma_srq_size": 0, 00:19:12.755 "io_path_stat": false, 00:19:12.755 "allow_accel_sequence": false, 00:19:12.755 "rdma_max_cq_size": 0, 00:19:12.755 "rdma_cm_event_timeout_ms": 0, 00:19:12.755 "dhchap_digests": [ 00:19:12.755 "sha256", 00:19:12.755 "sha384", 00:19:12.755 "sha512" 00:19:12.755 ], 00:19:12.755 "dhchap_dhgroups": [ 00:19:12.755 "null", 00:19:12.755 "ffdhe2048", 00:19:12.755 "ffdhe3072", 00:19:12.755 "ffdhe4096", 00:19:12.755 "ffdhe6144", 00:19:12.755 "ffdhe8192" 00:19:12.755 ] 
00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_nvme_set_hotplug", 00:19:12.755 "params": { 00:19:12.755 "period_us": 100000, 00:19:12.755 "enable": false 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_malloc_create", 00:19:12.755 "params": { 00:19:12.755 "name": "malloc0", 00:19:12.755 "num_blocks": 8192, 00:19:12.755 "block_size": 4096, 00:19:12.755 "physical_block_size": 4096, 00:19:12.755 "uuid": "1c66fb49-1d1f-44dc-9d8e-6b37f38961d7", 00:19:12.755 "optimal_io_boundary": 0, 00:19:12.755 "md_size": 0, 00:19:12.755 "dif_type": 0, 00:19:12.755 "dif_is_head_of_md": false, 00:19:12.755 "dif_pi_format": 0 00:19:12.755 } 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "method": "bdev_wait_for_examine" 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "nbd", 00:19:12.755 "config": [] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "scheduler", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "framework_set_scheduler", 00:19:12.755 "params": { 00:19:12.755 "name": "static" 00:19:12.755 } 00:19:12.755 } 00:19:12.755 ] 00:19:12.755 }, 00:19:12.755 { 00:19:12.755 "subsystem": "nvmf", 00:19:12.755 "config": [ 00:19:12.755 { 00:19:12.755 "method": "nvmf_set_config", 00:19:12.755 "params": { 00:19:12.756 "discovery_filter": "match_any", 00:19:12.756 "admin_cmd_passthru": { 00:19:12.756 "identify_ctrlr": false 00:19:12.756 }, 00:19:12.756 "dhchap_digests": [ 00:19:12.756 "sha256", 00:19:12.756 "sha384", 00:19:12.756 "sha512" 00:19:12.756 ], 00:19:12.756 "dhchap_dhgroups": [ 00:19:12.756 "null", 00:19:12.756 "ffdhe2048", 00:19:12.756 "ffdhe3072", 00:19:12.756 "ffdhe4096", 00:19:12.756 "ffdhe6144", 00:19:12.756 "ffdhe8192" 00:19:12.756 ] 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": "nvmf_set_max_subsystems", 00:19:12.756 "params": { 00:19:12.756 "max_subsystems": 1024 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": 
"nvmf_set_crdt", 00:19:12.756 "params": { 00:19:12.756 "crdt1": 0, 00:19:12.756 "crdt2": 0, 00:19:12.756 "crdt3": 0 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": "nvmf_create_transport", 00:19:12.756 "params": { 00:19:12.756 "trtype": "TCP", 00:19:12.756 "max_queue_depth": 128, 00:19:12.756 "max_io_qpairs_per_ctrlr": 127, 00:19:12.756 "in_capsule_data_size": 4096, 00:19:12.756 "max_io_size": 131072, 00:19:12.756 "io_unit_size": 131072, 00:19:12.756 "max_aq_depth": 128, 00:19:12.756 "num_shared_buffers": 511, 00:19:12.756 "buf_cache_size": 4294967295, 00:19:12.756 "dif_insert_or_strip": false, 00:19:12.756 "zcopy": false, 00:19:12.756 "c2h_success": false, 00:19:12.756 "sock_priority": 0, 00:19:12.756 "abort_timeout_sec": 1, 00:19:12.756 "ack_timeout": 0, 00:19:12.756 "data_wr_pool_size": 0 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": "nvmf_create_subsystem", 00:19:12.756 "params": { 00:19:12.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.756 "allow_any_host": false, 00:19:12.756 "serial_number": "00000000000000000000", 00:19:12.756 "model_number": "SPDK bdev Controller", 00:19:12.756 "max_namespaces": 32, 00:19:12.756 "min_cntlid": 1, 00:19:12.756 "max_cntlid": 65519, 00:19:12.756 "ana_reporting": false 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": "nvmf_subsystem_add_host", 00:19:12.756 "params": { 00:19:12.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.756 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.756 "psk": "key0" 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "method": "nvmf_subsystem_add_ns", 00:19:12.756 "params": { 00:19:12.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.756 "namespace": { 00:19:12.756 "nsid": 1, 00:19:12.756 "bdev_name": "malloc0", 00:19:12.756 "nguid": "1C66FB491D1F44DC9D8E6B37F38961D7", 00:19:12.756 "uuid": "1c66fb49-1d1f-44dc-9d8e-6b37f38961d7", 00:19:12.756 "no_auto_visible": false 00:19:12.756 } 00:19:12.756 } 00:19:12.756 }, 00:19:12.756 { 
00:19:12.756 "method": "nvmf_subsystem_add_listener", 00:19:12.756 "params": { 00:19:12.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.756 "listen_address": { 00:19:12.756 "trtype": "TCP", 00:19:12.756 "adrfam": "IPv4", 00:19:12.756 "traddr": "10.0.0.2", 00:19:12.756 "trsvcid": "4420" 00:19:12.756 }, 00:19:12.756 "secure_channel": false, 00:19:12.756 "sock_impl": "ssl" 00:19:12.756 } 00:19:12.756 } 00:19:12.756 ] 00:19:12.756 } 00:19:12.756 ] 00:19:12.756 }' 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2679189 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2679189 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2679189 ']' 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.756 09:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.756 [2024-11-20 09:57:46.293172] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:19:12.756 [2024-11-20 09:57:46.293224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.015 [2024-11-20 09:57:46.367610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.015 [2024-11-20 09:57:46.407591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.015 [2024-11-20 09:57:46.407627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.015 [2024-11-20 09:57:46.407635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.015 [2024-11-20 09:57:46.407641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.015 [2024-11-20 09:57:46.407647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:13.015 [2024-11-20 09:57:46.408226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.274 [2024-11-20 09:57:46.618494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.274 [2024-11-20 09:57:46.650520] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.274 [2024-11-20 09:57:46.650747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2679361 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2679361 /var/tmp/bdevperf.sock 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2679361 ']' 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.843 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:13.843 "subsystems": [ 00:19:13.843 { 00:19:13.843 "subsystem": "keyring", 00:19:13.843 "config": [ 00:19:13.843 { 00:19:13.843 "method": "keyring_file_add_key", 00:19:13.843 "params": { 00:19:13.843 "name": "key0", 00:19:13.843 "path": "/tmp/tmp.Q1ZPp63qoV" 00:19:13.843 } 00:19:13.843 } 00:19:13.843 ] 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "subsystem": "iobuf", 00:19:13.843 "config": [ 00:19:13.843 { 00:19:13.843 "method": "iobuf_set_options", 00:19:13.843 "params": { 00:19:13.843 "small_pool_count": 8192, 00:19:13.843 "large_pool_count": 1024, 00:19:13.843 "small_bufsize": 8192, 00:19:13.843 "large_bufsize": 135168, 00:19:13.843 "enable_numa": false 00:19:13.843 } 00:19:13.843 } 00:19:13.843 ] 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "subsystem": "sock", 00:19:13.843 "config": [ 00:19:13.843 { 00:19:13.843 "method": "sock_set_default_impl", 00:19:13.843 "params": { 00:19:13.843 "impl_name": "posix" 00:19:13.843 } 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "method": "sock_impl_set_options", 00:19:13.843 "params": { 00:19:13.843 "impl_name": "ssl", 00:19:13.843 "recv_buf_size": 4096, 00:19:13.843 "send_buf_size": 4096, 00:19:13.843 "enable_recv_pipe": true, 00:19:13.843 "enable_quickack": false, 00:19:13.843 "enable_placement_id": 0, 00:19:13.843 "enable_zerocopy_send_server": true, 00:19:13.843 "enable_zerocopy_send_client": false, 00:19:13.843 "zerocopy_threshold": 0, 00:19:13.843 "tls_version": 0, 00:19:13.843 "enable_ktls": false 00:19:13.843 } 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "method": "sock_impl_set_options", 00:19:13.843 "params": { 
00:19:13.843 "impl_name": "posix", 00:19:13.843 "recv_buf_size": 2097152, 00:19:13.843 "send_buf_size": 2097152, 00:19:13.843 "enable_recv_pipe": true, 00:19:13.843 "enable_quickack": false, 00:19:13.843 "enable_placement_id": 0, 00:19:13.843 "enable_zerocopy_send_server": true, 00:19:13.843 "enable_zerocopy_send_client": false, 00:19:13.843 "zerocopy_threshold": 0, 00:19:13.843 "tls_version": 0, 00:19:13.843 "enable_ktls": false 00:19:13.843 } 00:19:13.843 } 00:19:13.843 ] 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "subsystem": "vmd", 00:19:13.843 "config": [] 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "subsystem": "accel", 00:19:13.843 "config": [ 00:19:13.843 { 00:19:13.843 "method": "accel_set_options", 00:19:13.843 "params": { 00:19:13.843 "small_cache_size": 128, 00:19:13.843 "large_cache_size": 16, 00:19:13.843 "task_count": 2048, 00:19:13.843 "sequence_count": 2048, 00:19:13.843 "buf_count": 2048 00:19:13.843 } 00:19:13.843 } 00:19:13.843 ] 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "subsystem": "bdev", 00:19:13.843 "config": [ 00:19:13.843 { 00:19:13.843 "method": "bdev_set_options", 00:19:13.843 "params": { 00:19:13.843 "bdev_io_pool_size": 65535, 00:19:13.843 "bdev_io_cache_size": 256, 00:19:13.843 "bdev_auto_examine": true, 00:19:13.843 "iobuf_small_cache_size": 128, 00:19:13.843 "iobuf_large_cache_size": 16 00:19:13.843 } 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "method": "bdev_raid_set_options", 00:19:13.843 "params": { 00:19:13.843 "process_window_size_kb": 1024, 00:19:13.843 "process_max_bandwidth_mb_sec": 0 00:19:13.843 } 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "method": "bdev_iscsi_set_options", 00:19:13.843 "params": { 00:19:13.843 "timeout_sec": 30 00:19:13.843 } 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "method": "bdev_nvme_set_options", 00:19:13.843 "params": { 00:19:13.843 "action_on_timeout": "none", 00:19:13.843 "timeout_us": 0, 00:19:13.843 "timeout_admin_us": 0, 00:19:13.843 "keep_alive_timeout_ms": 10000, 00:19:13.843 
"arbitration_burst": 0, 00:19:13.843 "low_priority_weight": 0, 00:19:13.843 "medium_priority_weight": 0, 00:19:13.843 "high_priority_weight": 0, 00:19:13.843 "nvme_adminq_poll_period_us": 10000, 00:19:13.843 "nvme_ioq_poll_period_us": 0, 00:19:13.843 "io_queue_requests": 512, 00:19:13.843 "delay_cmd_submit": true, 00:19:13.844 "transport_retry_count": 4, 00:19:13.844 "bdev_retry_count": 3, 00:19:13.844 "transport_ack_timeout": 0, 00:19:13.844 "ctrlr_loss_timeout_sec": 0, 00:19:13.844 "reconnect_delay_sec": 0, 00:19:13.844 "fast_io_fail_timeout_sec": 0, 00:19:13.844 "disable_auto_failback": false, 00:19:13.844 "generate_uuids": false, 00:19:13.844 "transport_tos": 0, 00:19:13.844 "nvme_error_stat": false, 00:19:13.844 "rdma_srq_size": 0, 00:19:13.844 "io_path_stat": false, 00:19:13.844 "allow_accel_sequence": false, 00:19:13.844 "rdma_max_cq_size": 0, 00:19:13.844 "rdma_cm_event_timeout_ms": 0, 00:19:13.844 "dhchap_digests": [ 00:19:13.844 "sha256", 00:19:13.844 "sha384", 00:19:13.844 "sha512" 00:19:13.844 ], 00:19:13.844 "dhchap_dhgroups": [ 00:19:13.844 "null", 00:19:13.844 "ffdhe2048", 00:19:13.844 "ffdhe3072", 00:19:13.844 "ffdhe4096", 00:19:13.844 "ffdhe6144", 00:19:13.844 "ffdhe8192" 00:19:13.844 ] 00:19:13.844 } 00:19:13.844 }, 00:19:13.844 { 00:19:13.844 "method": "bdev_nvme_attach_controller", 00:19:13.844 "params": { 00:19:13.844 "name": "nvme0", 00:19:13.844 "trtype": "TCP", 00:19:13.844 "adrfam": "IPv4", 00:19:13.844 "traddr": "10.0.0.2", 00:19:13.844 "trsvcid": "4420", 00:19:13.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.844 "prchk_reftag": false, 00:19:13.844 "prchk_guard": false, 00:19:13.844 "ctrlr_loss_timeout_sec": 0, 00:19:13.844 "reconnect_delay_sec": 0, 00:19:13.844 "fast_io_fail_timeout_sec": 0, 00:19:13.844 "psk": "key0", 00:19:13.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.844 "hdgst": false, 00:19:13.844 "ddgst": false, 00:19:13.844 "multipath": "multipath" 00:19:13.844 } 00:19:13.844 }, 00:19:13.844 { 00:19:13.844 
"method": "bdev_nvme_set_hotplug", 00:19:13.844 "params": { 00:19:13.844 "period_us": 100000, 00:19:13.844 "enable": false 00:19:13.844 } 00:19:13.844 }, 00:19:13.844 { 00:19:13.844 "method": "bdev_enable_histogram", 00:19:13.844 "params": { 00:19:13.844 "name": "nvme0n1", 00:19:13.844 "enable": true 00:19:13.844 } 00:19:13.844 }, 00:19:13.844 { 00:19:13.844 "method": "bdev_wait_for_examine" 00:19:13.844 } 00:19:13.844 ] 00:19:13.844 }, 00:19:13.844 { 00:19:13.844 "subsystem": "nbd", 00:19:13.844 "config": [] 00:19:13.844 } 00:19:13.844 ] 00:19:13.844 }' 00:19:13.844 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.844 09:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.844 [2024-11-20 09:57:47.196085] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:13.844 [2024-11-20 09:57:47.196132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679361 ] 00:19:13.844 [2024-11-20 09:57:47.271821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.844 [2024-11-20 09:57:47.313236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.103 [2024-11-20 09:57:47.466085] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.669 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.669 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:14.669 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:14.669 09:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:14.669 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.669 09:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.928 Running I/O for 1 seconds... 00:19:15.862 5092.00 IOPS, 19.89 MiB/s 00:19:15.862 Latency(us) 00:19:15.862 [2024-11-20T08:57:49.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:15.862 Verification LBA range: start 0x0 length 0x2000 00:19:15.862 nvme0n1 : 1.01 5152.75 20.13 0.00 0.00 24673.54 4899.60 65910.49 00:19:15.862 [2024-11-20T08:57:49.444Z] =================================================================================================================== 00:19:15.862 [2024-11-20T08:57:49.444Z] Total : 5152.75 20.13 0.00 0.00 24673.54 4899.60 65910.49 00:19:15.862 { 00:19:15.862 "results": [ 00:19:15.862 { 00:19:15.862 "job": "nvme0n1", 00:19:15.862 "core_mask": "0x2", 00:19:15.862 "workload": "verify", 00:19:15.862 "status": "finished", 00:19:15.862 "verify_range": { 00:19:15.862 "start": 0, 00:19:15.862 "length": 8192 00:19:15.862 }, 00:19:15.862 "queue_depth": 128, 00:19:15.862 "io_size": 4096, 00:19:15.862 "runtime": 1.013052, 00:19:15.862 "iops": 5152.746354580022, 00:19:15.862 "mibps": 20.12791544757821, 00:19:15.862 "io_failed": 0, 00:19:15.862 "io_timeout": 0, 00:19:15.862 "avg_latency_us": 24673.538755701513, 00:19:15.862 "min_latency_us": 4899.596190476191, 00:19:15.862 "max_latency_us": 65910.49142857143 00:19:15.862 } 00:19:15.862 ], 00:19:15.862 "core_count": 1 00:19:15.862 } 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:15.862 09:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:15.862 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:15.862 nvmf_trace.0 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2679361 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2679361 ']' 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2679361 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2679361 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2679361' 00:19:16.125 killing process with pid 2679361 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2679361 00:19:16.125 Received shutdown signal, test time was about 1.000000 seconds 00:19:16.125 00:19:16.125 Latency(us) 00:19:16.125 [2024-11-20T08:57:49.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.125 [2024-11-20T08:57:49.707Z] =================================================================================================================== 00:19:16.125 [2024-11-20T08:57:49.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2679361 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.125 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.125 rmmod nvme_tcp 00:19:16.125 rmmod nvme_fabrics 00:19:16.125 rmmod nvme_keyring 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2679189 ']' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2679189 ']' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2679189' 00:19:16.403 killing process with pid 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2679189 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.403 09:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.fGXqrDF2Mb /tmp/tmp.by3auwdbCe /tmp/tmp.Q1ZPp63qoV 00:19:18.972 00:19:18.972 real 1m19.934s 00:19:18.972 user 2m2.939s 00:19:18.972 sys 0m29.558s 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.972 ************************************ 00:19:18.972 END TEST nvmf_tls 00:19:18.972 ************************************ 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.972 ************************************ 00:19:18.972 START TEST nvmf_fips 00:19:18.972 ************************************ 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:18.972 * Looking for test storage... 00:19:18.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.972 
09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:18.972 09:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.972 --rc genhtml_branch_coverage=1 00:19:18.972 --rc genhtml_function_coverage=1 00:19:18.972 --rc genhtml_legend=1 00:19:18.972 --rc geninfo_all_blocks=1 00:19:18.972 --rc geninfo_unexecuted_blocks=1 00:19:18.972 00:19:18.972 ' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.972 --rc genhtml_branch_coverage=1 00:19:18.972 --rc genhtml_function_coverage=1 00:19:18.972 --rc genhtml_legend=1 00:19:18.972 --rc geninfo_all_blocks=1 00:19:18.972 --rc geninfo_unexecuted_blocks=1 00:19:18.972 00:19:18.972 ' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.972 --rc genhtml_branch_coverage=1 00:19:18.972 --rc genhtml_function_coverage=1 00:19:18.972 --rc genhtml_legend=1 00:19:18.972 --rc geninfo_all_blocks=1 00:19:18.972 --rc geninfo_unexecuted_blocks=1 00:19:18.972 00:19:18.972 ' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:18.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.972 --rc genhtml_branch_coverage=1 00:19:18.972 --rc genhtml_function_coverage=1 00:19:18.972 --rc genhtml_legend=1 00:19:18.972 --rc geninfo_all_blocks=1 00:19:18.972 --rc geninfo_unexecuted_blocks=1 00:19:18.972 00:19:18.972 ' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.972 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.973 09:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.973 09:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:18.973 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:18.974 Error setting digest 00:19:18.974 40A240D08D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:18.974 40A240D08D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:18.974 09:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.974 09:57:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.546 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.546 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:25.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:25.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:25.547 Found net devices under 0000:86:00.0: cvl_0_0 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:25.547 Found net devices under 0000:86:00.1: cvl_0_1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.547 09:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:25.547 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:25.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:19:25.547 00:19:25.547 --- 10.0.0.2 ping statistics --- 00:19:25.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.547 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:19:25.548 00:19:25.548 --- 10.0.0.1 ping statistics --- 00:19:25.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.548 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.548 09:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2683388 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2683388 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2683388 ']' 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.548 09:57:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.548 [2024-11-20 09:57:58.512492] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:19:25.548 [2024-11-20 09:57:58.512548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.548 [2024-11-20 09:57:58.583880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.548 [2024-11-20 09:57:58.624726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.548 [2024-11-20 09:57:58.624760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.548 [2024-11-20 09:57:58.624768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.548 [2024-11-20 09:57:58.624774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.548 [2024-11-20 09:57:58.624779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.548 [2024-11-20 09:57:58.625313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9pT 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:25.808 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9pT 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9pT 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9pT 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.066 [2024-11-20 09:57:59.558267] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.066 [2024-11-20 09:57:59.574287] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.066 [2024-11-20 09:57:59.574488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.066 malloc0 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2683645 00:19:26.066 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.067 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2683645 /var/tmp/bdevperf.sock 00:19:26.067 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2683645 ']' 00:19:26.067 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.326 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.326 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.326 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.326 09:57:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.326 [2024-11-20 09:57:59.703592] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:19:26.326 [2024-11-20 09:57:59.703643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683645 ] 00:19:26.326 [2024-11-20 09:57:59.776478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.326 [2024-11-20 09:57:59.816479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.262 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.262 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:27.262 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9pT 00:19:27.262 09:58:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.521 [2024-11-20 09:58:00.908841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.521 TLSTESTn1 00:19:27.521 09:58:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.521 Running I/O for 10 seconds... 
00:19:29.833 5213.00 IOPS, 20.36 MiB/s [2024-11-20T08:58:04.351Z] 5426.00 IOPS, 21.20 MiB/s [2024-11-20T08:58:05.288Z] 5476.33 IOPS, 21.39 MiB/s [2024-11-20T08:58:06.225Z] 5512.25 IOPS, 21.53 MiB/s [2024-11-20T08:58:07.163Z] 5537.80 IOPS, 21.63 MiB/s [2024-11-20T08:58:08.540Z] 5531.17 IOPS, 21.61 MiB/s [2024-11-20T08:58:09.476Z] 5538.29 IOPS, 21.63 MiB/s [2024-11-20T08:58:10.411Z] 5544.00 IOPS, 21.66 MiB/s [2024-11-20T08:58:11.349Z] 5552.89 IOPS, 21.69 MiB/s [2024-11-20T08:58:11.349Z] 5553.80 IOPS, 21.69 MiB/s 00:19:37.767 Latency(us) 00:19:37.767 [2024-11-20T08:58:11.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.767 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:37.767 Verification LBA range: start 0x0 length 0x2000 00:19:37.767 TLSTESTn1 : 10.02 5553.16 21.69 0.00 0.00 23009.51 5180.46 47435.58 00:19:37.767 [2024-11-20T08:58:11.349Z] =================================================================================================================== 00:19:37.767 [2024-11-20T08:58:11.349Z] Total : 5553.16 21.69 0.00 0.00 23009.51 5180.46 47435.58 00:19:37.767 { 00:19:37.767 "results": [ 00:19:37.767 { 00:19:37.767 "job": "TLSTESTn1", 00:19:37.767 "core_mask": "0x4", 00:19:37.767 "workload": "verify", 00:19:37.767 "status": "finished", 00:19:37.767 "verify_range": { 00:19:37.767 "start": 0, 00:19:37.767 "length": 8192 00:19:37.767 }, 00:19:37.767 "queue_depth": 128, 00:19:37.767 "io_size": 4096, 00:19:37.767 "runtime": 10.023837, 00:19:37.767 "iops": 5553.162925534403, 00:19:37.767 "mibps": 21.692042677868763, 00:19:37.767 "io_failed": 0, 00:19:37.767 "io_timeout": 0, 00:19:37.767 "avg_latency_us": 23009.50972337426, 00:19:37.767 "min_latency_us": 5180.464761904762, 00:19:37.767 "max_latency_us": 47435.58095238095 00:19:37.767 } 00:19:37.767 ], 00:19:37.767 "core_count": 1 00:19:37.767 } 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:37.767 
09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:37.767 nvmf_trace.0 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2683645 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2683645 ']' 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2683645 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2683645 00:19:37.767 09:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683645' 00:19:37.767 killing process with pid 2683645 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2683645 00:19:37.767 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.767 00:19:37.767 Latency(us) 00:19:37.767 [2024-11-20T08:58:11.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.767 [2024-11-20T08:58:11.349Z] =================================================================================================================== 00:19:37.767 [2024-11-20T08:58:11.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.767 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2683645 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.027 rmmod nvme_tcp 00:19:38.027 rmmod nvme_fabrics 00:19:38.027 rmmod nvme_keyring 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2683388 ']' 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2683388 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2683388 ']' 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2683388 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2683388 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683388' 00:19:38.027 killing process with pid 2683388 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2683388 00:19:38.027 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2683388 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.286 09:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9pT 00:19:40.824 00:19:40.824 real 0m21.732s 00:19:40.824 user 0m23.471s 00:19:40.824 sys 0m9.734s 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.824 ************************************ 00:19:40.824 END TEST nvmf_fips 00:19:40.824 ************************************ 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.824 ************************************ 00:19:40.824 START TEST nvmf_control_msg_list 00:19:40.824 ************************************ 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:40.824 * Looking for test storage... 00:19:40.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.824 09:58:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.824 09:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.824 --rc genhtml_branch_coverage=1 00:19:40.824 --rc genhtml_function_coverage=1 00:19:40.824 --rc genhtml_legend=1 00:19:40.824 --rc geninfo_all_blocks=1 00:19:40.824 --rc geninfo_unexecuted_blocks=1 00:19:40.824 00:19:40.824 ' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.824 --rc genhtml_branch_coverage=1 00:19:40.824 --rc genhtml_function_coverage=1 00:19:40.824 --rc genhtml_legend=1 00:19:40.824 --rc geninfo_all_blocks=1 00:19:40.824 --rc geninfo_unexecuted_blocks=1 00:19:40.824 00:19:40.824 ' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.824 --rc genhtml_branch_coverage=1 00:19:40.824 --rc genhtml_function_coverage=1 00:19:40.824 --rc genhtml_legend=1 00:19:40.824 --rc geninfo_all_blocks=1 00:19:40.824 --rc geninfo_unexecuted_blocks=1 00:19:40.824 00:19:40.824 ' 00:19:40.824 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:40.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.824 --rc genhtml_branch_coverage=1 00:19:40.824 --rc genhtml_function_coverage=1 00:19:40.824 --rc genhtml_legend=1 00:19:40.824 --rc geninfo_all_blocks=1 00:19:40.824 --rc geninfo_unexecuted_blocks=1 00:19:40.825 00:19:40.825 ' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.825 09:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.825 09:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.825 09:58:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.397 09:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.397 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.398 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.398 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.398 09:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.398 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.398 09:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.398 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.398 09:58:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.398 09:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:47.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:19:47.398 00:19:47.398 --- 10.0.0.2 ping statistics --- 00:19:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.398 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:19:47.398 00:19:47.398 --- 10.0.0.1 ping statistics --- 00:19:47.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.398 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.398 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2689015 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2689015 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2689015 ']' 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.399 [2024-11-20 09:58:20.151463] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:47.399 [2024-11-20 09:58:20.151507] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.399 [2024-11-20 09:58:20.231779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.399 [2024-11-20 09:58:20.271169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.399 [2024-11-20 09:58:20.271200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.399 [2024-11-20 09:58:20.271212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.399 [2024-11-20 09:58:20.271217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.399 [2024-11-20 09:58:20.271223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.399 [2024-11-20 09:58:20.271780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.399 09:58:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 [2024-11-20 09:58:21.016617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.658 Malloc0 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:47.658 [2024-11-20 09:58:21.056789] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2689261 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2689262 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2689263 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2689261 00:19:47.658 09:58:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:47.658 [2024-11-20 09:58:21.135148] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:47.658 [2024-11-20 09:58:21.155265] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:47.658 [2024-11-20 09:58:21.155409] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:49.048 Initializing NVMe Controllers 00:19:49.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:49.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:49.048 Initialization complete. Launching workers. 00:19:49.048 ======================================================== 00:19:49.048 Latency(us) 00:19:49.048 Device Information : IOPS MiB/s Average min max 00:19:49.048 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3898.00 15.23 256.15 150.97 581.78 00:19:49.048 ======================================================== 00:19:49.048 Total : 3898.00 15.23 256.15 150.97 581.78 00:19:49.048 00:19:49.048 Initializing NVMe Controllers 00:19:49.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:49.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:49.048 Initialization complete. Launching workers. 
00:19:49.048 ======================================================== 00:19:49.048 Latency(us) 00:19:49.048 Device Information : IOPS MiB/s Average min max 00:19:49.048 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 63.00 0.25 16423.33 276.39 41887.92 00:19:49.048 ======================================================== 00:19:49.048 Total : 63.00 0.25 16423.33 276.39 41887.92 00:19:49.048 00:19:49.048 Initializing NVMe Controllers 00:19:49.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:49.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:49.048 Initialization complete. Launching workers. 00:19:49.048 ======================================================== 00:19:49.048 Latency(us) 00:19:49.048 Device Information : IOPS MiB/s Average min max 00:19:49.048 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 154.00 0.60 6587.65 239.54 41158.20 00:19:49.048 ======================================================== 00:19:49.048 Total : 154.00 0.60 6587.65 239.54 41158.20 00:19:49.048 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2689262 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2689263 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.048 09:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.048 rmmod nvme_tcp 00:19:49.048 rmmod nvme_fabrics 00:19:49.048 rmmod nvme_keyring 00:19:49.048 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2689015 ']' 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2689015 ']' 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2689015' 00:19:49.049 killing process with pid 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2689015 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.049 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.307 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:49.307 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.307 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.307 09:58:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.213 00:19:51.213 real 0m10.801s 00:19:51.213 user 0m7.443s 
00:19:51.213 sys 0m5.471s 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:51.213 ************************************ 00:19:51.213 END TEST nvmf_control_msg_list 00:19:51.213 ************************************ 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.213 ************************************ 00:19:51.213 START TEST nvmf_wait_for_buf 00:19:51.213 ************************************ 00:19:51.213 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:51.473 * Looking for test storage... 
00:19:51.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:51.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.473 --rc genhtml_branch_coverage=1 00:19:51.473 --rc genhtml_function_coverage=1 00:19:51.473 --rc genhtml_legend=1 00:19:51.473 --rc geninfo_all_blocks=1 00:19:51.473 --rc geninfo_unexecuted_blocks=1 00:19:51.473 00:19:51.473 ' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:51.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.473 --rc genhtml_branch_coverage=1 00:19:51.473 --rc genhtml_function_coverage=1 00:19:51.473 --rc genhtml_legend=1 00:19:51.473 --rc geninfo_all_blocks=1 00:19:51.473 --rc geninfo_unexecuted_blocks=1 00:19:51.473 00:19:51.473 ' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:51.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.473 --rc genhtml_branch_coverage=1 00:19:51.473 --rc genhtml_function_coverage=1 00:19:51.473 --rc genhtml_legend=1 00:19:51.473 --rc geninfo_all_blocks=1 00:19:51.473 --rc geninfo_unexecuted_blocks=1 00:19:51.473 00:19:51.473 ' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:51.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.473 --rc genhtml_branch_coverage=1 00:19:51.473 --rc genhtml_function_coverage=1 00:19:51.473 --rc genhtml_legend=1 00:19:51.473 --rc geninfo_all_blocks=1 00:19:51.473 --rc geninfo_unexecuted_blocks=1 00:19:51.473 00:19:51.473 ' 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.473 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.474 09:58:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.040 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.040 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.041 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.041 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.041 09:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.041 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.041 09:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.041 09:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:19:58.041 00:19:58.041 --- 10.0.0.2 ping statistics --- 00:19:58.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.041 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:58.041 00:19:58.041 --- 10.0.0.1 ping statistics --- 00:19:58.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.041 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2692976 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2692976 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2692976 ']' 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.041 09:58:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.041 [2024-11-20 09:58:30.961979] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:19:58.041 [2024-11-20 09:58:30.962027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.041 [2024-11-20 09:58:31.041712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.041 [2024-11-20 09:58:31.082081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.041 [2024-11-20 09:58:31.082116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:58.041 [2024-11-20 09:58:31.082122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.041 [2024-11-20 09:58:31.082128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.041 [2024-11-20 09:58:31.082133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.041 [2024-11-20 09:58:31.082673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.041 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.041 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:58.041 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.041 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.041 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 
09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 Malloc0 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.042 [2024-11-20 09:58:31.255663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 [2024-11-20 09:58:31.283873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:58.042 09:58:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:58.042 [2024-11-20 09:58:31.365155] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:59.420 Initializing NVMe Controllers 00:19:59.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:59.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:59.420 Initialization complete. Launching workers. 00:19:59.420 ======================================================== 00:19:59.420 Latency(us) 00:19:59.420 Device Information : IOPS MiB/s Average min max 00:19:59.420 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.81 7265.78 63845.32 00:19:59.420 ======================================================== 00:19:59.420 Total : 129.00 16.12 32238.81 7265.78 63845.32 00:19:59.420 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.420 09:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.420 rmmod nvme_tcp 00:19:59.420 rmmod nvme_fabrics 00:19:59.420 rmmod nvme_keyring 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2692976 ']' 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2692976 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2692976 ']' 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2692976 
00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.420 09:58:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2692976 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2692976' 00:19:59.679 killing process with pid 2692976 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2692976 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2692976 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.679 09:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.679 09:58:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.216 00:20:02.216 real 0m10.490s 00:20:02.216 user 0m4.048s 00:20:02.216 sys 0m4.888s 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:02.216 ************************************ 00:20:02.216 END TEST nvmf_wait_for_buf 00:20:02.216 ************************************ 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.216 09:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.493 
09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.493 09:58:40 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.493 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.493 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.494 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.494 ************************************ 00:20:07.494 START TEST nvmf_perf_adq 00:20:07.494 ************************************ 00:20:07.494 09:58:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:07.494 * Looking for test storage... 00:20:07.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.494 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.494 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.494 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:07.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.754 --rc genhtml_branch_coverage=1 00:20:07.754 --rc genhtml_function_coverage=1 00:20:07.754 --rc genhtml_legend=1 00:20:07.754 --rc geninfo_all_blocks=1 00:20:07.754 --rc geninfo_unexecuted_blocks=1 00:20:07.754 00:20:07.754 ' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:07.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.754 --rc genhtml_branch_coverage=1 00:20:07.754 --rc genhtml_function_coverage=1 00:20:07.754 --rc genhtml_legend=1 00:20:07.754 --rc geninfo_all_blocks=1 00:20:07.754 --rc geninfo_unexecuted_blocks=1 00:20:07.754 00:20:07.754 ' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:07.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.754 --rc genhtml_branch_coverage=1 00:20:07.754 --rc genhtml_function_coverage=1 00:20:07.754 --rc genhtml_legend=1 00:20:07.754 --rc geninfo_all_blocks=1 00:20:07.754 --rc geninfo_unexecuted_blocks=1 00:20:07.754 00:20:07.754 ' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:07.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.754 --rc genhtml_branch_coverage=1 00:20:07.754 --rc genhtml_function_coverage=1 00:20:07.754 --rc genhtml_legend=1 00:20:07.754 --rc geninfo_all_blocks=1 00:20:07.754 --rc geninfo_unexecuted_blocks=1 00:20:07.754 00:20:07.754 ' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.754 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.755 09:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.755 09:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.326 09:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.326 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.326 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.327 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.327 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.327 09:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.327 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:14.327 09:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:14.327 09:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:16.862 09:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.164 09:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.164 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.165 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.165 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.165 09:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:22.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:20:22.165 00:20:22.165 --- 10.0.0.2 ping statistics --- 00:20:22.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.165 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:20:22.165 00:20:22.165 --- 10.0.0.1 ping statistics --- 00:20:22.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.165 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2701317 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2701317 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2701317 ']' 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.165 09:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.165 [2024-11-20 09:58:55.348051] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:20:22.165 [2024-11-20 09:58:55.348100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.165 [2024-11-20 09:58:55.428056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.165 [2024-11-20 09:58:55.471320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.165 [2024-11-20 09:58:55.471358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.165 [2024-11-20 09:58:55.471366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.165 [2024-11-20 09:58:55.471372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.165 [2024-11-20 09:58:55.471376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
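The `nvmf_tcp_init` trace above wires the target NIC into a network namespace, addresses both ends, opens the NVMe/TCP port, and ping-verifies the path before starting `nvmf_tgt`. The same sequence can be sketched as a dry-run script; interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the `10.0.0.0/24` addresses are taken from the log, and the `run` wrapper only echoes so it stays runnable without root:

```shell
# Dry-run sketch of the namespace wiring nvmf_tcp_init performs in the log.
# Swap the run() body for: run() { "$@"; }  to apply the commands for real (root required).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"           # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
run ping -c 1 10.0.0.2                        # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator reachability
```

Isolating the target NIC in its own namespace is what lets a single host act as both initiator and target over a real physical link, which is why `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk`.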
00:20:22.165 [2024-11-20 09:58:55.472892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.165 [2024-11-20 09:58:55.473001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.165 [2024-11-20 09:58:55.473111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.165 [2024-11-20 09:58:55.473112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:22.786 09:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.786 [2024-11-20 09:58:56.342892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:22.786 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.044 Malloc1 00:20:23.044 09:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.044 [2024-11-20 09:58:56.406904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2701424 00:20:23.044 09:58:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:23.044 09:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:24.942 "tick_rate": 2100000000, 00:20:24.942 "poll_groups": [ 00:20:24.942 { 00:20:24.942 "name": "nvmf_tgt_poll_group_000", 00:20:24.942 "admin_qpairs": 1, 00:20:24.942 "io_qpairs": 1, 00:20:24.942 "current_admin_qpairs": 1, 00:20:24.942 "current_io_qpairs": 1, 00:20:24.942 "pending_bdev_io": 0, 00:20:24.942 "completed_nvme_io": 18807, 00:20:24.942 "transports": [ 00:20:24.942 { 00:20:24.942 "trtype": "TCP" 00:20:24.942 } 00:20:24.942 ] 00:20:24.942 }, 00:20:24.942 { 00:20:24.942 "name": "nvmf_tgt_poll_group_001", 00:20:24.942 "admin_qpairs": 0, 00:20:24.942 "io_qpairs": 1, 00:20:24.942 "current_admin_qpairs": 0, 00:20:24.942 "current_io_qpairs": 1, 00:20:24.942 "pending_bdev_io": 0, 00:20:24.942 "completed_nvme_io": 19180, 00:20:24.942 "transports": [ 00:20:24.942 { 00:20:24.942 "trtype": "TCP" 00:20:24.942 } 00:20:24.942 ] 00:20:24.942 }, 00:20:24.942 { 00:20:24.942 "name": "nvmf_tgt_poll_group_002", 00:20:24.942 "admin_qpairs": 0, 00:20:24.942 "io_qpairs": 1, 00:20:24.942 "current_admin_qpairs": 0, 00:20:24.942 "current_io_qpairs": 1, 00:20:24.942 "pending_bdev_io": 0, 00:20:24.942 "completed_nvme_io": 18697, 00:20:24.942 
"transports": [ 00:20:24.942 { 00:20:24.942 "trtype": "TCP" 00:20:24.942 } 00:20:24.942 ] 00:20:24.942 }, 00:20:24.942 { 00:20:24.942 "name": "nvmf_tgt_poll_group_003", 00:20:24.942 "admin_qpairs": 0, 00:20:24.942 "io_qpairs": 1, 00:20:24.942 "current_admin_qpairs": 0, 00:20:24.942 "current_io_qpairs": 1, 00:20:24.942 "pending_bdev_io": 0, 00:20:24.942 "completed_nvme_io": 18842, 00:20:24.942 "transports": [ 00:20:24.942 { 00:20:24.942 "trtype": "TCP" 00:20:24.942 } 00:20:24.942 ] 00:20:24.942 } 00:20:24.942 ] 00:20:24.942 }' 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:24.942 09:58:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2701424 00:20:33.099 Initializing NVMe Controllers 00:20:33.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:33.099 Initialization complete. Launching workers. 
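The `nvmf_get_stats` check above pipes the stats JSON through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` and counts the lines with `wc -l`: with ADQ steering working, each of the four poll groups must be servicing exactly one IO qpair, so the count must equal 4 (`[[ 4 -ne 4 ]]` then falls through). A minimal stand-alone reproduction of that check, using a trimmed stand-in for the RPC output and `grep -c` in place of the jq filter so it needs no extra tools:

```shell
# Reproduce the perf_adq.sh sanity check from the log: every nvmf poll group
# must currently service exactly one IO qpair. The lines below are a trimmed
# stand-in for nvmf_get_stats output; grep -c counts the matching groups the
# same way the jq | wc -l pipeline does in the test script.
stats='"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1,
"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1,
"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1,
"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1,'

count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
if [ "$count" -ne 4 ]; then
    echo "ADQ steering check failed: only $count poll groups have one IO qpair" >&2
    exit 1
fi
echo "count=$count"
```

A count other than 4 would mean the kernel's flow steering placed two connections on one poll group (and none on another), defeating the per-queue isolation ADQ is meant to provide.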
00:20:33.099 ======================================================== 00:20:33.099 Latency(us) 00:20:33.099 Device Information : IOPS MiB/s Average min max 00:20:33.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10490.03 40.98 6102.56 2118.67 11770.62 00:20:33.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10694.73 41.78 5985.78 2184.83 10800.21 00:20:33.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10378.23 40.54 6167.14 2191.04 10949.57 00:20:33.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10532.13 41.14 6076.56 2280.95 11057.64 00:20:33.099 ======================================================== 00:20:33.099 Total : 42095.13 164.43 6082.31 2118.67 11770.62 00:20:33.099 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.099 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.099 rmmod nvme_tcp 00:20:33.358 rmmod nvme_fabrics 00:20:33.358 rmmod nvme_keyring 00:20:33.358 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.358 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:33.358 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:33.358 09:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2701317 ']' 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2701317 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2701317 ']' 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2701317 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701317 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701317' 00:20:33.359 killing process with pid 2701317 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2701317 00:20:33.359 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2701317 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:33.618 
09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.618 09:59:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.524 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.524 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:35.524 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:35.524 09:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:36.901 09:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:38.804 09:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.077 09:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.077 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.078 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.078 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.078 09:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.078 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:20:44.078 00:20:44.078 --- 10.0.0.2 ping statistics --- 00:20:44.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.078 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:20:44.078 00:20:44.078 --- 10.0.0.1 ping statistics --- 00:20:44.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.078 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:44.078 net.core.busy_poll = 1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:44.078 net.core.busy_read = 1 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:44.078 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2705222 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2705222 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2705222 ']' 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.338 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.338 [2024-11-20 09:59:17.894362] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:20:44.338 [2024-11-20 09:59:17.894419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.597 [2024-11-20 09:59:17.976741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.597 [2024-11-20 09:59:18.017646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.597 [2024-11-20 09:59:18.017686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.597 [2024-11-20 09:59:18.017693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.597 [2024-11-20 09:59:18.017699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:44.597 [2024-11-20 09:59:18.017704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.597 [2024-11-20 09:59:18.019296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.597 [2024-11-20 09:59:18.019406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.597 [2024-11-20 09:59:18.019519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.597 [2024-11-20 09:59:18.019520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.597 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 [2024-11-20 09:59:18.228058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.855 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.856 09:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.856 Malloc1 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.856 [2024-11-20 09:59:18.296488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2705446 
00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:44.856 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:46.764 "tick_rate": 2100000000, 00:20:46.764 "poll_groups": [ 00:20:46.764 { 00:20:46.764 "name": "nvmf_tgt_poll_group_000", 00:20:46.764 "admin_qpairs": 1, 00:20:46.764 "io_qpairs": 1, 00:20:46.764 "current_admin_qpairs": 1, 00:20:46.764 "current_io_qpairs": 1, 00:20:46.764 "pending_bdev_io": 0, 00:20:46.764 "completed_nvme_io": 27521, 00:20:46.764 "transports": [ 00:20:46.764 { 00:20:46.764 "trtype": "TCP" 00:20:46.764 } 00:20:46.764 ] 00:20:46.764 }, 00:20:46.764 { 00:20:46.764 "name": "nvmf_tgt_poll_group_001", 00:20:46.764 "admin_qpairs": 0, 00:20:46.764 "io_qpairs": 3, 00:20:46.764 "current_admin_qpairs": 0, 00:20:46.764 "current_io_qpairs": 3, 00:20:46.764 "pending_bdev_io": 0, 00:20:46.764 "completed_nvme_io": 28313, 00:20:46.764 "transports": [ 00:20:46.764 { 00:20:46.764 "trtype": "TCP" 00:20:46.764 } 00:20:46.764 ] 00:20:46.764 }, 00:20:46.764 { 00:20:46.764 "name": "nvmf_tgt_poll_group_002", 00:20:46.764 "admin_qpairs": 0, 00:20:46.764 "io_qpairs": 0, 00:20:46.764 "current_admin_qpairs": 0, 
00:20:46.764 "current_io_qpairs": 0, 00:20:46.764 "pending_bdev_io": 0, 00:20:46.764 "completed_nvme_io": 0, 00:20:46.764 "transports": [ 00:20:46.764 { 00:20:46.764 "trtype": "TCP" 00:20:46.764 } 00:20:46.764 ] 00:20:46.764 }, 00:20:46.764 { 00:20:46.764 "name": "nvmf_tgt_poll_group_003", 00:20:46.764 "admin_qpairs": 0, 00:20:46.764 "io_qpairs": 0, 00:20:46.764 "current_admin_qpairs": 0, 00:20:46.764 "current_io_qpairs": 0, 00:20:46.764 "pending_bdev_io": 0, 00:20:46.764 "completed_nvme_io": 0, 00:20:46.764 "transports": [ 00:20:46.764 { 00:20:46.764 "trtype": "TCP" 00:20:46.764 } 00:20:46.764 ] 00:20:46.764 } 00:20:46.764 ] 00:20:46.764 }' 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:46.764 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:47.021 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:47.021 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:47.021 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2705446 00:20:55.126 Initializing NVMe Controllers 00:20:55.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:55.126 Initialization complete. Launching workers. 
00:20:55.126 ======================================================== 00:20:55.126 Latency(us) 00:20:55.126 Device Information : IOPS MiB/s Average min max 00:20:55.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5017.50 19.60 12782.87 1576.02 58253.61 00:20:55.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5108.70 19.96 12566.25 1887.29 58418.71 00:20:55.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4948.20 19.33 12962.04 1788.29 60572.35 00:20:55.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15447.90 60.34 4142.51 1523.76 6966.92 00:20:55.126 ======================================================== 00:20:55.126 Total : 30522.29 119.23 8402.61 1523.76 60572.35 00:20:55.126 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.126 rmmod nvme_tcp 00:20:55.126 rmmod nvme_fabrics 00:20:55.126 rmmod nvme_keyring 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:55.126 09:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2705222 ']' 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2705222 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2705222 ']' 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2705222 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705222 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705222' 00:20:55.126 killing process with pid 2705222 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2705222 00:20:55.126 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2705222 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:55.385 
09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.385 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.921 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.921 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:57.921 00:20:57.921 real 0m49.947s 00:20:57.921 user 2m47.362s 00:20:57.921 sys 0m10.441s 00:20:57.921 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.922 ************************************ 00:20:57.922 END TEST nvmf_perf_adq 00:20:57.922 ************************************ 00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x
00:20:57.922 ************************************
00:20:57.922 START TEST nvmf_shutdown
00:20:57.922 ************************************
00:20:57.922 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:57.922 * Looking for test storage...
00:20:57.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:20:57.922 09:59:31
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.922 --rc genhtml_branch_coverage=1 00:20:57.922 --rc genhtml_function_coverage=1 00:20:57.922 --rc genhtml_legend=1 00:20:57.922 --rc geninfo_all_blocks=1 00:20:57.922 --rc geninfo_unexecuted_blocks=1 00:20:57.922 00:20:57.922 ' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.922 --rc genhtml_branch_coverage=1 00:20:57.922 --rc genhtml_function_coverage=1 00:20:57.922 --rc genhtml_legend=1 00:20:57.922 --rc geninfo_all_blocks=1 00:20:57.922 --rc geninfo_unexecuted_blocks=1 00:20:57.922 00:20:57.922 ' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.922 --rc genhtml_branch_coverage=1 00:20:57.922 --rc genhtml_function_coverage=1 00:20:57.922 --rc genhtml_legend=1 00:20:57.922 --rc geninfo_all_blocks=1 00:20:57.922 --rc geninfo_unexecuted_blocks=1 00:20:57.922 00:20:57.922 ' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.922 --rc genhtml_branch_coverage=1 00:20:57.922 --rc genhtml_function_coverage=1 00:20:57.922 --rc genhtml_legend=1 00:20:57.922 --rc geninfo_all_blocks=1 00:20:57.922 --rc geninfo_unexecuted_blocks=1 00:20:57.922 00:20:57.922 ' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:57.922 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.923 ************************************ 00:20:57.923 START TEST nvmf_shutdown_tc1 00:20:57.923 ************************************ 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.923 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:04.495 09:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.495 09:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.495 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.496 09:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.496 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.496 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.496 09:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.496 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:21:04.496 00:21:04.496 --- 10.0.0.2 ping statistics --- 00:21:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.496 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:21:04.496 00:21:04.496 --- 10.0.0.1 ping statistics --- 00:21:04.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.496 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2710678 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2710678 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2710678 ']' 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.496 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.497 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:04.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.497 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.497 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.497 [2024-11-20 09:59:37.316904] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:21:04.497 [2024-11-20 09:59:37.316951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.497 [2024-11-20 09:59:37.396637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.497 [2024-11-20 09:59:37.435946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.497 [2024-11-20 09:59:37.435984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.497 [2024-11-20 09:59:37.435991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.497 [2024-11-20 09:59:37.435997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.497 [2024-11-20 09:59:37.436001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.497 [2024-11-20 09:59:37.437574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.497 [2024-11-20 09:59:37.437681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.497 [2024-11-20 09:59:37.437764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.497 [2024-11-20 09:59:37.437765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.754 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.755 [2024-11-20 09:59:38.204228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.755 09:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.755 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.755 Malloc1 00:21:04.755 [2024-11-20 09:59:38.317497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.011 Malloc2 00:21:05.011 Malloc3 00:21:05.011 Malloc4 00:21:05.012 Malloc5 00:21:05.012 Malloc6 00:21:05.012 Malloc7 00:21:05.269 Malloc8 00:21:05.269 Malloc9 
00:21:05.269 Malloc10 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2710955 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2710955 /var/tmp/bdevperf.sock 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2710955 ']' 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.269 { 00:21:05.269 "params": { 00:21:05.269 "name": "Nvme$subsystem", 00:21:05.269 "trtype": "$TEST_TRANSPORT", 00:21:05.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.269 "adrfam": "ipv4", 00:21:05.269 "trsvcid": "$NVMF_PORT", 00:21:05.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.269 "hdgst": ${hdgst:-false}, 00:21:05.269 "ddgst": ${ddgst:-false} 00:21:05.269 }, 00:21:05.269 "method": "bdev_nvme_attach_controller" 00:21:05.269 } 00:21:05.269 EOF 00:21:05.269 )") 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.269 { 00:21:05.269 "params": { 00:21:05.269 "name": "Nvme$subsystem", 00:21:05.269 "trtype": "$TEST_TRANSPORT", 00:21:05.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.269 "adrfam": "ipv4", 00:21:05.269 "trsvcid": "$NVMF_PORT", 00:21:05.269 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.269 "hdgst": ${hdgst:-false}, 00:21:05.269 "ddgst": ${ddgst:-false} 00:21:05.269 }, 00:21:05.269 "method": "bdev_nvme_attach_controller" 00:21:05.269 } 00:21:05.269 EOF 00:21:05.269 )") 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.269 { 00:21:05.269 "params": { 00:21:05.269 "name": "Nvme$subsystem", 00:21:05.269 "trtype": "$TEST_TRANSPORT", 00:21:05.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.269 "adrfam": "ipv4", 00:21:05.269 "trsvcid": "$NVMF_PORT", 00:21:05.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.269 "hdgst": ${hdgst:-false}, 00:21:05.269 "ddgst": ${ddgst:-false} 00:21:05.269 }, 00:21:05.269 "method": "bdev_nvme_attach_controller" 00:21:05.269 } 00:21:05.269 EOF 00:21:05.269 )") 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.269 { 00:21:05.269 "params": { 00:21:05.269 "name": "Nvme$subsystem", 00:21:05.269 "trtype": "$TEST_TRANSPORT", 00:21:05.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.269 "adrfam": "ipv4", 00:21:05.269 "trsvcid": "$NVMF_PORT", 00:21:05.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.269 "hdgst": 
${hdgst:-false}, 00:21:05.269 "ddgst": ${ddgst:-false} 00:21:05.269 }, 00:21:05.269 "method": "bdev_nvme_attach_controller" 00:21:05.269 } 00:21:05.269 EOF 00:21:05.269 )") 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.269 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 
00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 [2024-11-20 09:59:38.791463] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:05.270 [2024-11-20 09:59:38.791513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 
00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.270 { 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme$subsystem", 00:21:05.270 "trtype": "$TEST_TRANSPORT", 00:21:05.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "$NVMF_PORT", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.270 "hdgst": ${hdgst:-false}, 00:21:05.270 "ddgst": ${ddgst:-false} 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 } 00:21:05.270 EOF 00:21:05.270 )") 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:05.270 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme1", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme2", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme3", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme4", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 
00:21:05.270 "name": "Nvme5", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme6", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme7", 00:21:05.270 "trtype": "tcp", 00:21:05.270 "traddr": "10.0.0.2", 00:21:05.270 "adrfam": "ipv4", 00:21:05.270 "trsvcid": "4420", 00:21:05.270 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.270 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.270 "hdgst": false, 00:21:05.270 "ddgst": false 00:21:05.270 }, 00:21:05.270 "method": "bdev_nvme_attach_controller" 00:21:05.270 },{ 00:21:05.270 "params": { 00:21:05.270 "name": "Nvme8", 00:21:05.271 "trtype": "tcp", 00:21:05.271 "traddr": "10.0.0.2", 00:21:05.271 "adrfam": "ipv4", 00:21:05.271 "trsvcid": "4420", 00:21:05.271 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.271 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.271 "hdgst": false, 00:21:05.271 "ddgst": false 00:21:05.271 }, 00:21:05.271 "method": "bdev_nvme_attach_controller" 00:21:05.271 },{ 00:21:05.271 "params": { 00:21:05.271 "name": "Nvme9", 00:21:05.271 "trtype": "tcp", 00:21:05.271 "traddr": "10.0.0.2", 00:21:05.271 "adrfam": "ipv4", 00:21:05.271 "trsvcid": "4420", 00:21:05.271 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.271 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:05.271 "hdgst": false, 00:21:05.271 "ddgst": false 00:21:05.271 }, 00:21:05.271 "method": "bdev_nvme_attach_controller" 00:21:05.271 },{ 00:21:05.271 "params": { 00:21:05.271 "name": "Nvme10", 00:21:05.271 "trtype": "tcp", 00:21:05.271 "traddr": "10.0.0.2", 00:21:05.271 "adrfam": "ipv4", 00:21:05.271 "trsvcid": "4420", 00:21:05.271 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.271 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.271 "hdgst": false, 00:21:05.271 "ddgst": false 00:21:05.271 }, 00:21:05.271 "method": "bdev_nvme_attach_controller" 00:21:05.271 }' 00:21:05.528 [2024-11-20 09:59:38.867306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.528 [2024-11-20 09:59:38.908055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2710955 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:06.896 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:07.826 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2710955 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2710678 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.826 { 00:21:07.826 "params": { 00:21:07.826 "name": "Nvme$subsystem", 00:21:07.826 "trtype": "$TEST_TRANSPORT", 00:21:07.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.826 "adrfam": "ipv4", 00:21:07.826 "trsvcid": "$NVMF_PORT", 00:21:07.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.826 "hdgst": ${hdgst:-false}, 00:21:07.826 "ddgst": ${ddgst:-false} 00:21:07.826 }, 00:21:07.826 "method": "bdev_nvme_attach_controller" 00:21:07.826 } 00:21:07.826 EOF 00:21:07.826 )") 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.826 09:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.826 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.826 { 00:21:07.826 "params": { 00:21:07.826 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.827 { 00:21:07.827 "params": { 00:21:07.827 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.827 
09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.827 { 00:21:07.827 "params": { 00:21:07.827 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.827 { 00:21:07.827 "params": { 00:21:07.827 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:07.827 { 00:21:07.827 "params": { 00:21:07.827 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.827 { 00:21:07.827 "params": { 00:21:07.827 "name": "Nvme$subsystem", 00:21:07.827 "trtype": "$TEST_TRANSPORT", 00:21:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.827 "adrfam": "ipv4", 00:21:07.827 "trsvcid": "$NVMF_PORT", 00:21:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.827 "hdgst": ${hdgst:-false}, 00:21:07.827 "ddgst": ${ddgst:-false} 00:21:07.827 }, 00:21:07.827 "method": "bdev_nvme_attach_controller" 00:21:07.827 } 00:21:07.827 EOF 00:21:07.827 )") 00:21:07.827 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.827 [2024-11-20 09:59:41.284230] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:07.827 [2024-11-20 09:59:41.284281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711435 ] 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.828 { 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme$subsystem", 00:21:07.828 "trtype": "$TEST_TRANSPORT", 00:21:07.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "$NVMF_PORT", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.828 "hdgst": ${hdgst:-false}, 00:21:07.828 "ddgst": ${ddgst:-false} 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 } 00:21:07.828 EOF 00:21:07.828 )") 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.828 { 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme$subsystem", 00:21:07.828 "trtype": "$TEST_TRANSPORT", 00:21:07.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "$NVMF_PORT", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.828 "hdgst": ${hdgst:-false}, 00:21:07.828 "ddgst": ${ddgst:-false} 00:21:07.828 }, 00:21:07.828 "method": 
"bdev_nvme_attach_controller" 00:21:07.828 } 00:21:07.828 EOF 00:21:07.828 )") 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.828 { 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme$subsystem", 00:21:07.828 "trtype": "$TEST_TRANSPORT", 00:21:07.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "$NVMF_PORT", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.828 "hdgst": ${hdgst:-false}, 00:21:07.828 "ddgst": ${ddgst:-false} 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 } 00:21:07.828 EOF 00:21:07.828 )") 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:07.828 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme1", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme2", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme3", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme4", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 
00:21:07.828 "name": "Nvme5", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme6", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme7", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.828 "adrfam": "ipv4", 00:21:07.828 "trsvcid": "4420", 00:21:07.828 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:07.828 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:07.828 "hdgst": false, 00:21:07.828 "ddgst": false 00:21:07.828 }, 00:21:07.828 "method": "bdev_nvme_attach_controller" 00:21:07.828 },{ 00:21:07.828 "params": { 00:21:07.828 "name": "Nvme8", 00:21:07.828 "trtype": "tcp", 00:21:07.828 "traddr": "10.0.0.2", 00:21:07.829 "adrfam": "ipv4", 00:21:07.829 "trsvcid": "4420", 00:21:07.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:07.829 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:07.829 "hdgst": false, 00:21:07.829 "ddgst": false 00:21:07.829 }, 00:21:07.829 "method": "bdev_nvme_attach_controller" 00:21:07.829 },{ 00:21:07.829 "params": { 00:21:07.829 "name": "Nvme9", 00:21:07.829 "trtype": "tcp", 00:21:07.829 "traddr": "10.0.0.2", 00:21:07.829 "adrfam": "ipv4", 00:21:07.829 "trsvcid": "4420", 00:21:07.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:07.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:07.829 "hdgst": false, 00:21:07.829 "ddgst": false 00:21:07.829 }, 00:21:07.829 "method": "bdev_nvme_attach_controller" 00:21:07.829 },{ 00:21:07.829 "params": { 00:21:07.829 "name": "Nvme10", 00:21:07.829 "trtype": "tcp", 00:21:07.829 "traddr": "10.0.0.2", 00:21:07.829 "adrfam": "ipv4", 00:21:07.829 "trsvcid": "4420", 00:21:07.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:07.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:07.829 "hdgst": false, 00:21:07.829 "ddgst": false 00:21:07.829 }, 00:21:07.829 "method": "bdev_nvme_attach_controller" 00:21:07.829 }' 00:21:07.829 [2024-11-20 09:59:41.360921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.829 [2024-11-20 09:59:41.401764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.721 Running I/O for 1 seconds... 00:21:10.652 2261.00 IOPS, 141.31 MiB/s 00:21:10.652 Latency(us) 00:21:10.652 [2024-11-20T08:59:44.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.652 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme1n1 : 1.03 253.71 15.86 0.00 0.00 248736.16 4181.82 212711.13 00:21:10.652 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme2n1 : 1.03 251.35 15.71 0.00 0.00 246266.19 2449.80 223696.21 00:21:10.652 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme3n1 : 1.11 289.59 18.10 0.00 0.00 212513.55 13232.03 219701.64 00:21:10.652 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme4n1 : 1.08 302.17 18.89 0.00 0.00 196459.91 15416.56 200727.41 00:21:10.652 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme5n1 : 1.11 292.82 18.30 0.00 0.00 203525.24 10173.68 211712.49 00:21:10.652 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme6n1 : 1.11 287.02 17.94 0.00 0.00 205248.80 17351.44 211712.49 00:21:10.652 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme7n1 : 1.12 286.29 17.89 0.00 0.00 202651.45 15541.39 226692.14 00:21:10.652 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme8n1 : 1.12 284.75 17.80 0.00 0.00 200863.16 13668.94 217704.35 00:21:10.652 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme9n1 : 1.15 279.45 17.47 0.00 0.00 201952.30 12857.54 239674.51 00:21:10.652 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.652 Verification LBA range: start 0x0 length 0x400 00:21:10.652 Nvme10n1 : 1.16 331.49 20.72 0.00 0.00 168112.36 4337.86 218702.99 00:21:10.652 [2024-11-20T08:59:44.234Z] =================================================================================================================== 00:21:10.652 [2024-11-20T08:59:44.234Z] Total : 2858.65 178.67 0.00 0.00 206328.23 2449.80 239674.51 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
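As a sanity check on the summary line "2261.00 IOPS, 141.31 MiB/s": with the 64 KiB I/O size passed to bdevperf above (`-o 65536`), bandwidth in MiB/s is simply IOPS / 16. A one-liner (awk chosen arbitrarily) confirming the figures:

```shell
# 2261 IOPS * 65536 B per I/O = 148,176,896 B/s; divide by 1 MiB (1,048,576 B).
awk 'BEGIN { iops = 2261.00; printf "%.2f MiB/s\n", iops * 65536 / (1024 * 1024) }'
# prints: 141.31 MiB/s
```

The same ratio holds per job (e.g. Nvme1n1: 253.71 IOPS / 16 = 15.86 MiB/s) and for the Total row.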
00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.909 rmmod nvme_tcp 00:21:10.909 rmmod nvme_fabrics 00:21:10.909 rmmod nvme_keyring 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2710678 ']' 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2710678 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2710678 ']' 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2710678 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.909 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710678 00:21:10.910 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.910 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.910 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710678' 00:21:10.910 killing process with pid 2710678 00:21:10.910 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2710678 00:21:10.910 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2710678 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.477 09:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.477 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.384 00:21:13.384 real 0m15.652s 00:21:13.384 user 0m35.192s 00:21:13.384 sys 0m5.875s 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.384 ************************************ 00:21:13.384 END TEST nvmf_shutdown_tc1 00:21:13.384 ************************************ 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.384 ************************************ 00:21:13.384 
START TEST nvmf_shutdown_tc2 00:21:13.384 ************************************ 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:13.384 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.385 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.385 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.645 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.645 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.645 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.646 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.646 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.646 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.646 Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.646 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.646 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:13.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:21:13.646 00:21:13.646 --- 10.0.0.2 ping statistics --- 00:21:13.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.646 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:21:13.646 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:21:13.905 00:21:13.905 --- 10.0.0.1 ping statistics --- 00:21:13.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.905 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.905 09:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2712462 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2712462 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2712462 ']' 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.905 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:13.906 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.906 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.906 [2024-11-20 09:59:47.329749] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:21:13.906 [2024-11-20 09:59:47.329799] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.906 [2024-11-20 09:59:47.407347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.906 [2024-11-20 09:59:47.450000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.906 [2024-11-20 09:59:47.450034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.906 [2024-11-20 09:59:47.450042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.906 [2024-11-20 09:59:47.450048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.906 [2024-11-20 09:59:47.450054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.906 [2024-11-20 09:59:47.451573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.906 [2024-11-20 09:59:47.451681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.906 [2024-11-20 09:59:47.451765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.906 [2024-11-20 09:59:47.451766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.839 [2024-11-20 09:59:48.214435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.839 09:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.839 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.839 Malloc1 00:21:14.839 [2024-11-20 09:59:48.322875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.839 Malloc2 00:21:14.839 Malloc3 00:21:15.097 Malloc4 00:21:15.097 Malloc5 00:21:15.097 Malloc6 00:21:15.097 Malloc7 00:21:15.097 Malloc8 00:21:15.097 Malloc9 
00:21:15.357 Malloc10 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2712745 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2712745 /var/tmp/bdevperf.sock 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2712745 ']' 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:15.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 
"adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": 
${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.357 "params": { 00:21:15.357 "name": "Nvme$subsystem", 00:21:15.357 "trtype": "$TEST_TRANSPORT", 00:21:15.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.357 "adrfam": "ipv4", 00:21:15.357 "trsvcid": "$NVMF_PORT", 00:21:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.357 "hdgst": ${hdgst:-false}, 00:21:15.357 "ddgst": ${ddgst:-false} 00:21:15.357 }, 00:21:15.357 "method": "bdev_nvme_attach_controller" 00:21:15.357 } 00:21:15.357 EOF 00:21:15.357 )") 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.357 [2024-11-20 09:59:48.798159] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:15.357 [2024-11-20 09:59:48.798222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712745 ] 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.357 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.357 { 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme$subsystem", 00:21:15.358 "trtype": "$TEST_TRANSPORT", 00:21:15.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "$NVMF_PORT", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.358 "hdgst": ${hdgst:-false}, 00:21:15.358 "ddgst": ${ddgst:-false} 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 } 00:21:15.358 EOF 00:21:15.358 )") 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.358 { 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme$subsystem", 00:21:15.358 "trtype": "$TEST_TRANSPORT", 00:21:15.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "$NVMF_PORT", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.358 "hdgst": ${hdgst:-false}, 00:21:15.358 "ddgst": ${ddgst:-false} 00:21:15.358 }, 00:21:15.358 "method": 
"bdev_nvme_attach_controller" 00:21:15.358 } 00:21:15.358 EOF 00:21:15.358 )") 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:15.358 { 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme$subsystem", 00:21:15.358 "trtype": "$TEST_TRANSPORT", 00:21:15.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "$NVMF_PORT", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.358 "hdgst": ${hdgst:-false}, 00:21:15.358 "ddgst": ${ddgst:-false} 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 } 00:21:15.358 EOF 00:21:15.358 )") 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:15.358 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme1", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme2", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme3", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme4", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 
00:21:15.358 "name": "Nvme5", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme6", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme7", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme8", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme9", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 },{ 00:21:15.358 "params": { 00:21:15.358 "name": "Nvme10", 00:21:15.358 "trtype": "tcp", 00:21:15.358 "traddr": "10.0.0.2", 00:21:15.358 "adrfam": "ipv4", 00:21:15.358 "trsvcid": "4420", 00:21:15.358 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.358 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.358 "hdgst": false, 00:21:15.358 "ddgst": false 00:21:15.358 }, 00:21:15.358 "method": "bdev_nvme_attach_controller" 00:21:15.358 }' 00:21:15.358 [2024-11-20 09:59:48.874247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.358 [2024-11-20 09:59:48.914904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.255 Running I/O for 10 seconds... 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:17.255 09:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:17.255 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:17.515 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:17.515 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:17.515 09:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.515 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.515 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2712745 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2712745 ']' 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2712745 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:17.516 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.516 09:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712745 00:21:17.773 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.773 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.773 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712745' 00:21:17.773 killing process with pid 2712745 00:21:17.773 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2712745 00:21:17.773 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2712745 00:21:17.773 Received shutdown signal, test time was about 0.777364 seconds 00:21:17.773 00:21:17.773 Latency(us) 00:21:17.773 [2024-11-20T08:59:51.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.773 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme1n1 : 0.78 329.60 20.60 0.00 0.00 191047.19 13169.62 184749.10 00:21:17.773 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme2n1 : 0.75 262.71 16.42 0.00 0.00 234001.89 5118.05 201726.05 00:21:17.773 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme3n1 : 0.77 331.25 20.70 0.00 0.00 183033.42 21595.67 208716.56 00:21:17.773 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme4n1 : 0.77 336.06 21.00 0.00 0.00 176238.79 
4213.03 213709.78 00:21:17.773 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme5n1 : 0.77 250.42 15.65 0.00 0.00 232014.99 18974.23 228689.43 00:21:17.773 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme6n1 : 0.74 257.85 16.12 0.00 0.00 219534.55 20597.03 230686.72 00:21:17.773 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme7n1 : 0.75 255.36 15.96 0.00 0.00 216630.94 24217.11 203723.34 00:21:17.773 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme8n1 : 0.75 254.30 15.89 0.00 0.00 212504.06 13232.03 217704.35 00:21:17.773 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme9n1 : 0.76 251.21 15.70 0.00 0.00 210623.15 19348.72 217704.35 00:21:17.773 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.773 Verification LBA range: start 0x0 length 0x400 00:21:17.773 Nvme10n1 : 0.77 255.31 15.96 0.00 0.00 200560.54 6116.69 231685.36 00:21:17.773 [2024-11-20T08:59:51.355Z] =================================================================================================================== 00:21:17.773 [2024-11-20T08:59:51.355Z] Total : 2784.07 174.00 0.00 0.00 205410.56 4213.03 231685.36 00:21:18.031 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.964 rmmod nvme_tcp 00:21:18.964 rmmod nvme_fabrics 00:21:18.964 rmmod nvme_keyring 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2712462 ']' 
00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2712462 ']' 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712462' 00:21:18.964 killing process with pid 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2712462 00:21:18.964 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2712462 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.531 09:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.531 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.437 00:21:21.437 real 0m7.977s 00:21:21.437 user 0m24.209s 00:21:21.437 sys 0m1.334s 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.437 ************************************ 00:21:21.437 END TEST nvmf_shutdown_tc2 00:21:21.437 ************************************ 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:21.437 09:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.437 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.437 ************************************ 00:21:21.437 START TEST nvmf_shutdown_tc3 00:21:21.437 ************************************ 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.437 09:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.437 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.698 09:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:21.698 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.698 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.698 
09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.698 09:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.698 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:21.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:21.699 00:21:21.699 --- 10.0.0.2 ping statistics --- 00:21:21.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.699 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:21.699 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:21.958 00:21:21.958 --- 10.0.0.1 ping statistics --- 00:21:21.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.958 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2713979 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2713979 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2713979 ']' 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.958 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.958 [2024-11-20 09:59:55.369721] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:21:21.958 [2024-11-20 09:59:55.369769] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.958 [2024-11-20 09:59:55.448489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.958 [2024-11-20 09:59:55.488032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.958 [2024-11-20 09:59:55.488071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.958 [2024-11-20 09:59:55.488079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.958 [2024-11-20 09:59:55.488088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.958 [2024-11-20 09:59:55.488093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.958 [2024-11-20 09:59:55.489704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.958 [2024-11-20 09:59:55.489808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.958 [2024-11-20 09:59:55.489896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.958 [2024-11-20 09:59:55.489897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.892 [2024-11-20 09:59:56.243478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.892 09:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.892 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.892 Malloc1 00:21:22.892 [2024-11-20 09:59:56.355367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.892 Malloc2 00:21:22.892 Malloc3 00:21:22.892 Malloc4 00:21:23.151 Malloc5 00:21:23.151 Malloc6 00:21:23.151 Malloc7 00:21:23.151 Malloc8 00:21:23.151 Malloc9 
00:21:23.409 Malloc10 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2714290 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2714290 /var/tmp/bdevperf.sock 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2714290 ']' 00:21:23.409 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:23.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 
"adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": 
${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 [2024-11-20 09:59:56.836360] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:23.410 [2024-11-20 09:59:56.836422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714290 ] 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.410 "adrfam": "ipv4", 00:21:23.410 "trsvcid": "$NVMF_PORT", 00:21:23.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.410 "hdgst": ${hdgst:-false}, 00:21:23.410 "ddgst": ${ddgst:-false} 00:21:23.410 }, 00:21:23.410 "method": "bdev_nvme_attach_controller" 00:21:23.410 } 00:21:23.410 EOF 00:21:23.410 )") 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.410 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.410 { 00:21:23.410 "params": { 00:21:23.410 "name": "Nvme$subsystem", 00:21:23.410 "trtype": "$TEST_TRANSPORT", 00:21:23.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "$NVMF_PORT", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.411 "hdgst": 
${hdgst:-false}, 00:21:23.411 "ddgst": ${ddgst:-false} 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 } 00:21:23.411 EOF 00:21:23.411 )") 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.411 { 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme$subsystem", 00:21:23.411 "trtype": "$TEST_TRANSPORT", 00:21:23.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "$NVMF_PORT", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.411 "hdgst": ${hdgst:-false}, 00:21:23.411 "ddgst": ${ddgst:-false} 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 } 00:21:23.411 EOF 00:21:23.411 )") 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:23.411 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme1", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme2", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme3", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme4", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 
00:21:23.411 "name": "Nvme5", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme6", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme7", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme8", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme9", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 },{ 00:21:23.411 "params": { 00:21:23.411 "name": "Nvme10", 00:21:23.411 "trtype": "tcp", 00:21:23.411 "traddr": "10.0.0.2", 00:21:23.411 "adrfam": "ipv4", 00:21:23.411 "trsvcid": "4420", 00:21:23.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:23.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:23.411 "hdgst": false, 00:21:23.411 "ddgst": false 00:21:23.411 }, 00:21:23.411 "method": "bdev_nvme_attach_controller" 00:21:23.411 }' 00:21:23.411 [2024-11-20 09:59:56.911795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.411 [2024-11-20 09:59:56.952754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.421 Running I/O for 10 seconds... 00:21:25.421 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.421 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:25.421 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:25.421 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.421 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:25.684 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:25.943 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2713979
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2713979 ']'
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2713979
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713979
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713979'
00:21:26.205 killing process with pid 2713979
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2713979
00:21:26.205 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2713979
00:21:26.205 [2024-11-20 09:59:59.767746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b700 is same with the state(6) to be set
00:21:26.206 [2024-11-20 09:59:59.770058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202bbf0 is same with the state(6) to be set
00:21:26.206 [2024-11-20 09:59:59.771174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c0c0 is same with the state(6) to be set
00:21:26.207 [2024-11-20 09:59:59.772742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c5b0 is same with the state(6) to be set
00:21:26.208 [2024-11-20 09:59:59.773821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set
00:21:26.208 [2024-11-20 09:59:59.774122]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774194] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.774199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c930 is same with the state(6) to be set 00:21:26.208 [2024-11-20 09:59:59.775030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775107] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775185] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775266] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775342] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775413] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.775437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ce00 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776547] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.209 [2024-11-20 09:59:59.776619] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776693] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776765] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776835] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.776877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d2d0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777716] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777791] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202d7c0 is same with the state(6) to be set 00:21:26.210 [2024-11-20 09:59:59.777797]
[log truncated: the preceding *ERROR* line repeats with only the timestamp advancing, first for tqpair=0x202d7c0 (09:59:59.777797-09:59:59.778120) and then for tqpair=0x21a3290 (09:59:59.778690-09:59:59.778780)] 00:21:26.211 [2024-11-20 09:59:59.778786]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a3290 is same with the state(6) to be set 00:21:26.211 [2024-11-20 09:59:59.778792]
00:21:26.482 [2024-11-20 09:59:59.783834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:26.482 [2024-11-20 09:59:59.783873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log truncated: the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:0-3, each group ending in an nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR* "recv state ... is same with the state(6) to be set" line, for tqpair=0x22868c0, 0x1e5b640, 0x1d7a610, 0x22c3a60, 0x2286370, 0x22b39b0, 0x2291a90, 0x1e661b0, 0x1e65d50, and 0x22be460 (09:59:59.783834-09:59:59.784710)] 00:21:26.484 [2024-11-20 09:59:59.785102] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.484 [2024-11-20 09:59:59.785127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log truncated: the WRITE / ABORTED - SQ DELETION pair repeats on sqid:1 for cid:46-63 (lba:30464-32640), followed by analogous READ / ABORTED - SQ DELETION pairs beginning at cid:0 lba:24576 (09:59:59.785127-09:59:59.785840)] 00:21:26.485 [2024-11-20 09:59:59.785840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.785989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.785997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 
09:59:59.786019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:26.485 [2024-11-20 09:59:59.786711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.485 [2024-11-20 09:59:59.786816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.485 [2024-11-20 09:59:59.786824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:26.486 [2024-11-20 09:59:59.786877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 
09:59:59.786968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.786992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.786999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.787181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.787189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 
[2024-11-20 09:59:59.795438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.486 [2024-11-20 09:59:59.795479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.486 [2024-11-20 09:59:59.795491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 
09:59:59.795908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.795981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.487 [2024-11-20 09:59:59.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.487 [2024-11-20 09:59:59.796122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22868c0 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b640 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1d7a610 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c3a60 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2286370 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b39b0 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2291a90 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e661b0 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65d50 (9): Bad file descriptor 00:21:26.487 [2024-11-20 09:59:59.796315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be460 (9): Bad file descriptor 00:21:26.488 [2024-11-20 09:59:59.798033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.488 [2024-11-20 09:59:59.798240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798361] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 
[2024-11-20 09:59:59.798714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.488 [2024-11-20 09:59:59.798735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.488 [2024-11-20 09:59:59.798746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 
[2024-11-20 09:59:59.799190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.799412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.489 [2024-11-20 09:59:59.799422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.489 [2024-11-20 09:59:59.800687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] 
resetting controller 00:21:26.489 [2024-11-20 09:59:59.802427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:26.489 [2024-11-20 09:59:59.802642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.489 [2024-11-20 09:59:59.802664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65d50 with addr=10.0.0.2, port=4420 00:21:26.489 [2024-11-20 09:59:59.802676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65d50 is same with the state(6) to be set 00:21:26.489 [2024-11-20 09:59:59.803654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:26.489 [2024-11-20 09:59:59.803785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.489 [2024-11-20 09:59:59.803806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b39b0 with addr=10.0.0.2, port=4420 00:21:26.489 [2024-11-20 09:59:59.803818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b39b0 is same with the state(6) to be set 00:21:26.489 [2024-11-20 09:59:59.803833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65d50 (9): Bad file descriptor 00:21:26.489 [2024-11-20 09:59:59.803889] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.803960] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804017] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804068] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804121] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804175] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804247] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:26.489 [2024-11-20 09:59:59.804618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.490 [2024-11-20 09:59:59.804635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c3a60 with addr=10.0.0.2, port=4420 00:21:26.490 [2024-11-20 09:59:59.804650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c3a60 is same with the state(6) to be set 00:21:26.490 [2024-11-20 09:59:59.804662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b39b0 (9): Bad file descriptor 00:21:26.490 [2024-11-20 09:59:59.804673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:26.490 [2024-11-20 09:59:59.804682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:26.490 [2024-11-20 09:59:59.804691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:26.490 [2024-11-20 09:59:59.804702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:26.490 [2024-11-20 09:59:59.804764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.490 [2024-11-20 09:59:59.804969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.490 [2024-11-20 09:59:59.804980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.490 [2024-11-20 09:59:59.804987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command / ABORTED - SQ DELETION (00/08) completion pair repeats for READ cid:16-63 (lba 26624-32640) and WRITE cid:0-3 (lba 32768-33152), all sqid:1 nsid:1 len:128 ...]
00:21:26.491 [2024-11-20 09:59:59.805953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226a560 is same with the state(6) to be set
00:21:26.491 [2024-11-20 09:59:59.806084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c3a60 (9): Bad file descriptor
00:21:26.491 [2024-11-20 09:59:59.806097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:26.492 [2024-11-20 09:59:59.806105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:26.492 [2024-11-20 09:59:59.806113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:26.492 [2024-11-20 09:59:59.806121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:26.492 [2024-11-20 09:59:59.807194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:26.492 [2024-11-20 09:59:59.807223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:26.492 [2024-11-20 09:59:59.807232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:26.492 [2024-11-20 09:59:59.807240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:26.492 [2024-11-20 09:59:59.807249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:26.492 [2024-11-20 09:59:59.807568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.492 [2024-11-20 09:59:59.807586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22868c0 with addr=10.0.0.2, port=4420
00:21:26.492 [2024-11-20 09:59:59.807595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22868c0 is same with the state(6) to be set
00:21:26.492 [2024-11-20 09:59:59.807647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.492 [2024-11-20 09:59:59.807660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1-52 (lba 24704-31232), all sqid:1 nsid:1 len:128 ...]
00:21:26.493 [2024-11-20 09:59:59.808641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.808836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x206a450 is same with the state(6) to be set 00:21:26.493 [2024-11-20 09:59:59.809942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.809961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.809974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.493 [2024-11-20 09:59:59.809982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.493 [2024-11-20 09:59:59.809993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810159] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 
09:59:59.810482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.494 [2024-11-20 09:59:59.810705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.494 [2024-11-20 09:59:59.810713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.495 [2024-11-20 09:59:59.810794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810893] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.810984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.810993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.811127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.811136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223bd60 is same with the state(6) to be set 00:21:26.495 [2024-11-20 09:59:59.812254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812319] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.495 [2024-11-20 09:59:59.812542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.495 [2024-11-20 09:59:59.812550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812734] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 
09:59:59.812940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.812985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.812994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.496 [2024-11-20 09:59:59.813250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.496 [2024-11-20 09:59:59.813258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.496 [2024-11-20 09:59:59.813268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.813422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.813430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22690f0 is same with the state(6) to be set 00:21:26.497 [2024-11-20 09:59:59.814746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:26.497 [2024-11-20 09:59:59.814867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.814987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.814995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.497 [2024-11-20 09:59:59.815168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.497 [2024-11-20 09:59:59.815177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.498 [2024-11-20 09:59:59.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.498 [2024-11-20 09:59:59.815194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.498 [2024-11-20 09:59:59.815206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.498 [2024-11-20 09:59:59.815216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.498 [2024-11-20 09:59:59.815224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.498 [2024-11-20 09:59:59.815233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.498 [2024-11-20 09:59:59.815241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.498 [2024-11-20 09:59:59.815249] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.498 [2024-11-20 09:59:59.815258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs for cid:30-63 (lba:28416-32640, len:128) elided ...]
00:21:26.498 [2024-11-20 09:59:59.815826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226ba90 is same with the state(6) to be set
00:21:26.498 [2024-11-20 09:59:59.816812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.499 [2024-11-20 09:59:59.816828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) command/completion pairs for cid:1-63 (lba:24704-32640, len:128) elided ...]
00:21:26.500 [2024-11-20 09:59:59.817893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226cfc0 is same with the state(6) to be set
00:21:26.500 [2024-11-20 09:59:59.818876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.500 [2024-11-20 09:59:59.818892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION (00/08) pairs for READ cid:5-7 (lba:25216-25472), WRITE cid:0-3 (lba:32768-33152), and READ cid:8-17 (lba:25600-26752) elided ...]
00:21:26.501 [2024-11-20 09:59:59.819193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:26.501 [2024-11-20 09:59:59.819212] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 
09:59:59.819406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.501 [2024-11-20 09:59:59.819609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.501 [2024-11-20 09:59:59.819617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.502 [2024-11-20 09:59:59.819681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819768] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.502 [2024-11-20 09:59:59.819948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:26.502 [2024-11-20 09:59:59.819957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x31b50c0 is same with the state(6) to be set 00:21:26.502 [2024-11-20 09:59:59.820901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:26.502 [2024-11-20 09:59:59.820919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:26.502 [2024-11-20 09:59:59.820929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:26.502 [2024-11-20 09:59:59.820939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:26.502 [2024-11-20 09:59:59.820976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22868c0 (9): Bad file descriptor 00:21:26.502 [2024-11-20 09:59:59.821019] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:21:26.502 [2024-11-20 09:59:59.821037] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:21:26.502 [2024-11-20 09:59:59.821048] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:21:26.502 [2024-11-20 09:59:59.821110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:26.502 task offset: 30464 on job bdev=Nvme2n1 fails
00:21:26.502
00:21:26.502 Latency(us)
[2024-11-20T09:00:00.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme1n1 : 0.91 210.04 13.13 70.01 0.00 226220.62 17351.44 220700.28
00:21:26.502 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme2n1 : 0.90 212.92 13.31 70.97 0.00 219227.79 11609.23 220700.28
00:21:26.502 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme3n1 : 0.92 209.52 13.09 69.84 0.00 219052.62 15603.81 207717.91
00:21:26.502 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme4n1 : 0.92 209.00 13.06 69.67 0.00 215756.07 14854.83 212711.13
00:21:26.502 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme5n1 : 0.91 215.05 13.44 70.22 0.00 206850.82 25340.59 217704.35
00:21:26.502 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme6n1 ended in about 0.92 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme6n1 : 0.92 208.46 13.03 69.49 0.00 208644.63 16103.13 209715.20
00:21:26.502 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme7n1 ended in about 0.92 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme7n1 : 0.92 208.00 13.00 69.33 0.00 205303.22 16103.13 210713.84
00:21:26.502 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme8n1 ended in about 0.93 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme8n1 : 0.93 211.86 13.24 69.18 0.00 198811.50 7021.71 214708.42
00:21:26.502 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme9n1 ended in about 0.91 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme9n1 : 0.91 211.80 13.24 70.60 0.00 193444.57 30208.98 220700.28
00:21:26.502 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.502 Job: Nvme10n1 ended in about 0.90 seconds with error
00:21:26.502 Verification LBA range: start 0x0 length 0x400
00:21:26.502 Nvme10n1 : 0.90 212.20 13.26 70.73 0.00 189206.31 15915.89 234681.30
[2024-11-20T09:00:00.085Z] ===================================================================================================================
00:21:26.503 [2024-11-20T09:00:00.085Z] Total : 2108.84 131.80 700.04 0.00 208234.93 7021.71 234681.30
00:21:26.503 [2024-11-20 09:59:59.852863] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:26.503 [2024-11-20 09:59:59.852916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:26.503 [2024-11-20 09:59:59.853257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.503 [2024-11-20 09:59:59.853277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x1e661b0 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.853289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e661b0 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.853497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.853510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5b640 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.853518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5b640 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.853650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.853663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2291a90 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.853670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2291a90 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.853866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.853879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2286370 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.853887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286370 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.853896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.853903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.853910] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.853921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.855342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:26.503 [2024-11-20 09:59:59.855366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:26.503 [2024-11-20 09:59:59.855375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:26.503 [2024-11-20 09:59:59.855668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.855685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7a610 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.855694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7a610 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.855912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.855925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22be460 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.855932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22be460 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.855947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e661b0 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.855959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b640 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.855970] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2291a90 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.855979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2286370 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856019] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:26.503 [2024-11-20 09:59:59.856032] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:26.503 [2024-11-20 09:59:59.856043] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:26.503 [2024-11-20 09:59:59.856056] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:21:26.503 [2024-11-20 09:59:59.856208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.856222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e65d50 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.856231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e65d50 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.856445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.856458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b39b0 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.856466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b39b0 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.856662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.856674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c3a60 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.856682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c3a60 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.856692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7a610 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22be460 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 
00:21:26.503 [2024-11-20 09:59:59.856725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.856732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.856740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.856755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.856762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.856769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.856782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.856788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.856795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.856809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:21:26.503 [2024-11-20 09:59:59.856817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.856886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:26.503 [2024-11-20 09:59:59.856910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65d50 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b39b0 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c3a60 (9): Bad file descriptor 00:21:26.503 [2024-11-20 09:59:59.856938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.856950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.856957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.856965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.856971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.856978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:26.503 [2024-11-20 09:59:59.856983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:26.503 [2024-11-20 09:59:59.857075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.503 [2024-11-20 09:59:59.857088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22868c0 with addr=10.0.0.2, port=4420 00:21:26.503 [2024-11-20 09:59:59.857096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22868c0 is same with the state(6) to be set 00:21:26.503 [2024-11-20 09:59:59.857103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:26.503 [2024-11-20 09:59:59.857109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:26.503 [2024-11-20 09:59:59.857116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:26.503 [2024-11-20 09:59:59.857123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:26.504 [2024-11-20 09:59:59.857130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:26.504 [2024-11-20 09:59:59.857135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:26.504 [2024-11-20 09:59:59.857142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:26.504 [2024-11-20 09:59:59.857149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:26.504 [2024-11-20 09:59:59.857157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:26.504 [2024-11-20 09:59:59.857163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:26.504 [2024-11-20 09:59:59.857169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:26.504 [2024-11-20 09:59:59.857176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:26.504 [2024-11-20 09:59:59.857208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22868c0 (9): Bad file descriptor 00:21:26.504 [2024-11-20 09:59:59.857232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:26.504 [2024-11-20 09:59:59.857240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:26.504 [2024-11-20 09:59:59.857251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:26.504 [2024-11-20 09:59:59.857259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:26.763 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2714290 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2714290 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2714290 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.700 rmmod nvme_tcp 00:21:27.700 rmmod nvme_fabrics 00:21:27.700 rmmod nvme_keyring 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:27.700 10:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2713979 ']' 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2713979 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2713979 ']' 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2713979 00:21:27.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2713979) - No such process 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2713979 is not found' 00:21:27.700 Process with pid 2713979 is not found 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.700 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:30.235 00:21:30.235 real 0m8.319s 00:21:30.235 user 0m21.813s 00:21:30.235 sys 0m1.360s 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.235 ************************************ 00:21:30.235 END TEST nvmf_shutdown_tc3 00:21:30.235 ************************************ 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:30.235 ************************************ 00:21:30.235 START TEST nvmf_shutdown_tc4 00:21:30.235 ************************************ 00:21:30.235 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:30.235 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.235 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.235 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.236 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.236 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.236 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:30.236 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.236 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.236 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:30.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:21:30.236 00:21:30.236 --- 10.0.0.2 ping statistics --- 00:21:30.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.236 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:21:30.236 00:21:30.236 --- 10.0.0.1 ping statistics --- 00:21:30.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.236 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:30.236 10:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:30.236 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2715545
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2715545
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2715545 ']'
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:30.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:30.237 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:30.237 [2024-11-20 10:00:03.771031] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:21:30.237 [2024-11-20 10:00:03.771075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:30.495 [2024-11-20 10:00:03.852196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:30.495 [2024-11-20 10:00:03.895335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:30.495 [2024-11-20 10:00:03.895373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:30.495 [2024-11-20 10:00:03.895381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:30.495 [2024-11-20 10:00:03.895388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:30.495 [2024-11-20 10:00:03.895393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:30.495 [2024-11-20 10:00:03.896904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:30.495 [2024-11-20 10:00:03.897012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:30.495 [2024-11-20 10:00:03.897118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:30.495 [2024-11-20 10:00:03.897119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.059 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:31.318 [2024-11-20 10:00:04.643318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.318 10:00:04
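`nvmf_tgt` is started with `-m 0x1E`, and the reactors come up on cores 1-4, which are exactly the set bits of that mask (0x1E = 0b11110). A small illustration of how a hex core mask maps to core numbers (helper name is ours, not SPDK's):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices corresponding to the set bits of a core mask."""
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

# -m 0x1E == 0b11110: bits 1..4 are set, so one reactor runs on each of cores 1-4
print(cores_from_mask(0x1E))  # [1, 2, 3, 4]
```

Note the reactor start-up order in the log (2, 3, 1, 4) is just scheduling noise; the core set itself is determined by the mask.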
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.318 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:31.318 Malloc1
00:21:31.318 [2024-11-20 10:00:04.759481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:31.318 Malloc2
00:21:31.318 Malloc3
00:21:31.318 Malloc4
00:21:31.576 Malloc5
00:21:31.576 Malloc6
00:21:31.576 Malloc7
00:21:31.576 Malloc8
00:21:31.576 Malloc9
00:21:31.576 Malloc10
00:21:31.576 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.576 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:21:31.576 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:31.576 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:31.834 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2715870
00:21:31.834 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:21:31.834 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:31.834 [2024-11-20 10:00:05.263884] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
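The create_subsystems phase above loops `i` over `{1..10}`, appending one block of RPC commands per subsystem to `rpcs.txt` via `cat`, then replays the whole batch through a single `rpc_cmd` call (which is why Malloc1 through Malloc10 appear in one burst). A sketch of that batching pattern, assuming illustrative RPC arguments rather than shutdown.sh's actual ones:

```shell
# Sketch of the rpcs.txt batching pattern used by target/shutdown.sh.
# The RPC names are real SPDK RPCs, but the sizes/serials below are stand-ins.
num_subsystems=({1..10})
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
  cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# A single rpc.py invocation can then replay the batch, e.g.: rpc_cmd < "$rpcs"
```

Batching the RPCs into one file avoids paying the per-call socket round-trip to `/var/tmp/spdk.sock` forty times.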
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2715545
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2715545 ']'
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2715545
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715545
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715545'
00:21:37.108 killing process with pid 2715545
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2715545
00:21:37.108 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2715545
00:21:37.108 Write completed with error (sct=0, sc=8)
00:21:37.108 Write completed with error (sct=0, sc=8)
00:21:37.108 starting I/O failed: -6
00:21:37.108 Write completed with error (sct=0, sc=8)
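The killprocess helper traced above follows a defensive pattern: probe the pid with `kill -0`, inspect the process name with `ps` so it never signals `sudo` itself, then `kill` and `wait`. A minimal sketch of that pattern under the same assumptions (the real helper lives in common/autotest_common.sh; this stand-in is ours):

```shell
# Sketch of the killprocess pattern visible in the trace above.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # probe: not running, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = "sudo" ] && return 1                # refuse to signal sudo itself
  echo "killing process with pid $pid"
  kill "$pid"                                     # SIGTERM, matching the trace
  wait "$pid" 2>/dev/null || true                 # reap it (works for child pids)
  return 0
}
```

Killing the target while `spdk_nvme_perf` still has 128 commands queued is exactly what produces the error stream that follows: every in-flight write gets completed with an abort status.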
00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed 
with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 [2024-11-20 10:00:10.259409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 
Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 [2024-11-20 10:00:10.260319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 
starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 Write completed with error (sct=0, sc=8) 00:21:37.108 starting I/O failed: -6 00:21:37.108 
Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 [2024-11-20 10:00:10.261307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ 
transport error -6 (No such device or address) on qpair id 1 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write 
completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 
Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 [2024-11-20 10:00:10.262781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.109 NVMe io qpair process completion error 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 
Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6 00:21:37.109 Write completed with error 
(sct=0, sc=8) 00:21:37.109 Write completed with error (sct=0, sc=8) 00:21:37.109 starting I/O failed: -6
00:21:37.109 [log condensed: repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages omitted]
00:21:37.109 [2024-11-20 10:00:10.263714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.110 [2024-11-20 10:00:10.264598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.110 [2024-11-20 10:00:10.265601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.110 [2024-11-20 10:00:10.267631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:37.110 NVMe io qpair process completion error
00:21:37.111 [2024-11-20 10:00:10.268512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.111 [2024-11-20 10:00:10.269379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.111 [2024-11-20 10:00:10.270401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.112 [2024-11-20 10:00:10.272236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:37.112 NVMe io qpair process completion error
00:21:37.112 [2024-11-20 10:00:10.273189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.112 [2024-11-20 10:00:10.274066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.112 starting I/O failed: -6 00:21:37.112
Write completed with error (sct=0, sc=8) 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.112 starting I/O failed: -6 00:21:37.112 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 [2024-11-20 10:00:10.275092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:37.113 Write completed 
with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write 
completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 
Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 [2024-11-20 10:00:10.277009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.113 NVMe io qpair process completion error 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed 
with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 
00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 [2024-11-20 10:00:10.277991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 
00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 [2024-11-20 10:00:10.278851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write 
completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.113 Write completed with error (sct=0, sc=8) 00:21:37.113 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 
00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 [2024-11-20 10:00:10.279882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or 
address) on qpair id 1 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 
00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, 
sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 [2024-11-20 10:00:10.281630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.114 NVMe io qpair process completion error 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O 
failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 [2024-11-20 10:00:10.282851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 
Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 starting I/O failed: -6 00:21:37.114 Write completed with error (sct=0, sc=8) 00:21:37.114 Write completed with 
00:21:37.114 Write completed with error (sct=0, sc=8)
00:21:37.114 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.114 [2024-11-20 10:00:10.283652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.114 Write completed with error (sct=0, sc=8)
00:21:37.115 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.115 [2024-11-20 10:00:10.284688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.115 Write completed with error (sct=0, sc=8)
00:21:37.115 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.115 [2024-11-20 10:00:10.288337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.115 NVMe io qpair process completion error
00:21:37.115 Write completed with error (sct=0, sc=8)
00:21:37.115 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.115 [2024-11-20 10:00:10.289212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:37.115 Write completed with error (sct=0, sc=8)
00:21:37.115 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.115 [2024-11-20 10:00:10.289998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.115 Write completed with error (sct=0, sc=8)
00:21:37.115 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.115 [2024-11-20 10:00:10.291062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.115 Write completed with error (sct=0, sc=8)
00:21:37.116 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.116 [2024-11-20 10:00:10.292908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.116 NVMe io qpair process completion error
00:21:37.116 Write completed with error (sct=0, sc=8)
00:21:37.116 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.116 [2024-11-20 10:00:10.293880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:37.116 Write completed with error (sct=0, sc=8)
00:21:37.116 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.116 [2024-11-20 10:00:10.294781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:37.116 Write completed with error (sct=0, sc=8)
00:21:37.116 starting I/O failed: -6
[above two messages repeated for every outstanding write on the qpair]
00:21:37.116 [2024-11-20 10:00:10.295801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.116 Write completed with error (sct=0, sc=8)
00:21:37.116 starting I/O failed: -6
[above two messages repeated; log continues]
sc=8) 00:21:37.116 starting I/O failed: -6 00:21:37.116 Write completed with error (sct=0, sc=8) 00:21:37.116 starting I/O failed: -6 00:21:37.116 Write completed with error (sct=0, sc=8) 00:21:37.116 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 [2024-11-20 10:00:10.297790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.117 NVMe io qpair process completion error 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write 
completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 [2024-11-20 10:00:10.298856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write 
completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error 
(sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 [2024-11-20 10:00:10.299758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error 
(sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting 
I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 [2024-11-20 10:00:10.300779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 
starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 
00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, 
sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 [2024-11-20 10:00:10.303484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.117 NVMe io qpair process completion error 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 
starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 [2024-11-20 10:00:10.304472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with 
error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 Write completed with error (sct=0, sc=8) 00:21:37.117 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 
00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 [2024-11-20 10:00:10.305337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 
00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 
00:21:37.118 Write completed with error (sct=0, sc=8) 00:21:37.118 starting I/O failed: -6 00:21:37.118 [2024-11-20 10:00:10.306412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:37.118 [2024-11-20 10:00:10.310936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:37.118 NVMe io qpair process completion error 00:21:37.118 Initializing NVMe Controllers 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:37.118 Controller IO queue size 128, less than required. 00:21:37.118 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:37.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:37.118 Initialization complete. Launching workers. 
00:21:37.118 ======================================================== 00:21:37.118 Latency(us) 00:21:37.118 Device Information : IOPS MiB/s Average min max 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2284.02 98.14 56044.69 788.24 110077.54 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2163.33 92.96 59184.03 929.53 110189.00 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2189.54 94.08 58492.95 751.51 112916.40 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2183.62 93.83 58676.46 891.69 116363.12 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2178.76 93.62 58202.69 751.83 107758.40 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2156.56 92.66 58809.08 923.76 106245.73 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2166.71 93.10 58548.62 883.67 105222.30 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2193.76 94.26 57839.99 687.69 104439.25 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2180.24 93.68 58212.51 696.44 103732.54 00:21:37.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2272.18 97.63 55872.01 571.80 102520.98 00:21:37.118 ======================================================== 00:21:37.118 Total : 21968.71 943.97 57968.33 571.80 116363.12 00:21:37.118 00:21:37.118 [2024-11-20 10:00:10.315503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fdae0 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fb890 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11fc740 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fca70 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fd720 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fd900 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fbbc0 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fbef0 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fc410 is same with the state(6) to be set 00:21:37.118 [2024-11-20 10:00:10.315771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fb560 is same with the state(6) to be set 00:21:37.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:37.119 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2715870 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2715870 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2715870 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.495 rmmod nvme_tcp 00:21:38.495 rmmod nvme_fabrics 00:21:38.495 rmmod nvme_keyring 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2715545 ']' 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2715545 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2715545 ']' 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2715545 00:21:38.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2715545) - No such process 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2715545 is not found' 00:21:38.495 Process with pid 2715545 is not found 
00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.495 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.496 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.496 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.399 00:21:40.399 real 0m10.387s 00:21:40.399 user 0m27.406s 00:21:40.399 sys 0m5.317s 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.399 10:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:40.399 ************************************ 00:21:40.399 END TEST nvmf_shutdown_tc4 00:21:40.399 ************************************ 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:40.399 00:21:40.399 real 0m42.843s 00:21:40.399 user 1m48.853s 00:21:40.399 sys 0m14.196s 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:40.399 ************************************ 00:21:40.399 END TEST nvmf_shutdown 00:21:40.399 ************************************ 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.399 ************************************ 00:21:40.399 START TEST nvmf_nsid 00:21:40.399 ************************************ 00:21:40.399 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:40.399 * Looking for test storage... 
00:21:40.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.658 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.658 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.658 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.658 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.659 
10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.659 --rc genhtml_branch_coverage=1 00:21:40.659 --rc genhtml_function_coverage=1 00:21:40.659 --rc genhtml_legend=1 00:21:40.659 --rc geninfo_all_blocks=1 00:21:40.659 --rc 
geninfo_unexecuted_blocks=1 00:21:40.659 00:21:40.659 ' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.659 --rc genhtml_branch_coverage=1 00:21:40.659 --rc genhtml_function_coverage=1 00:21:40.659 --rc genhtml_legend=1 00:21:40.659 --rc geninfo_all_blocks=1 00:21:40.659 --rc geninfo_unexecuted_blocks=1 00:21:40.659 00:21:40.659 ' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.659 --rc genhtml_branch_coverage=1 00:21:40.659 --rc genhtml_function_coverage=1 00:21:40.659 --rc genhtml_legend=1 00:21:40.659 --rc geninfo_all_blocks=1 00:21:40.659 --rc geninfo_unexecuted_blocks=1 00:21:40.659 00:21:40.659 ' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.659 --rc genhtml_branch_coverage=1 00:21:40.659 --rc genhtml_function_coverage=1 00:21:40.659 --rc genhtml_legend=1 00:21:40.659 --rc geninfo_all_blocks=1 00:21:40.659 --rc geninfo_unexecuted_blocks=1 00:21:40.659 00:21:40.659 ' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.659 10:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:40.659 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.660 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.225 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.226 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.226 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.226 10:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.226 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.226 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:47.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:21:47.226 00:21:47.226 --- 10.0.0.2 ping statistics --- 00:21:47.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.226 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:47.226 00:21:47.226 --- 10.0.0.1 ping statistics --- 00:21:47.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.226 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.226 10:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2720814 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2720814 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2720814 ']' 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.226 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.226 [2024-11-20 10:00:20.116510] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:47.226 [2024-11-20 10:00:20.116553] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.226 [2024-11-20 10:00:20.195793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.226 [2024-11-20 10:00:20.235693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.226 [2024-11-20 10:00:20.235731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.226 [2024-11-20 10:00:20.235739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.226 [2024-11-20 10:00:20.235745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.226 [2024-11-20 10:00:20.235750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:47.226 [2024-11-20 10:00:20.236328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2720842 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:47.227 
10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=016be160-5d5f-4023-a67b-48f14bd4f0c9 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4c8b6e89-7743-4638-8b20-fc0b9b66c0a9 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1bb7b072-7f53-436c-9091-82c93b69b91e 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.227 null0 00:21:47.227 null1 00:21:47.227 null2 00:21:47.227 [2024-11-20 10:00:20.427298] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:21:47.227 [2024-11-20 10:00:20.427343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720842 ] 00:21:47.227 [2024-11-20 10:00:20.430960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.227 [2024-11-20 10:00:20.455166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2720842 /var/tmp/tgt2.sock 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2720842 ']' 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:47.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:47.227 [2024-11-20 10:00:20.502871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.227 [2024-11-20 10:00:20.543644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:47.227 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:47.794 [2024-11-20 10:00:21.071597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.794 [2024-11-20 10:00:21.087699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:47.794 nvme0n1 nvme0n2 00:21:47.794 nvme1n1 00:21:47.794 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:47.794 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:47.794 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:48.729 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:48.730 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 016be160-5d5f-4023-a67b-48f14bd4f0c9 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:49.665 10:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:49.665 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=016be1605d5f4023a67b48f14bd4f0c9 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 016BE1605D5F4023A67B48F14BD4F0C9 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 016BE1605D5F4023A67B48F14BD4F0C9 == \0\1\6\B\E\1\6\0\5\D\5\F\4\0\2\3\A\6\7\B\4\8\F\1\4\B\D\4\F\0\C\9 ]] 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4c8b6e89-7743-4638-8b20-fc0b9b66c0a9 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:49.924 
10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4c8b6e89774346388b20fc0b9b66c0a9 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4C8B6E89774346388B20FC0B9B66C0A9 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4C8B6E89774346388B20FC0B9B66C0A9 == \4\C\8\B\6\E\8\9\7\7\4\3\4\6\3\8\8\B\2\0\F\C\0\B\9\B\6\6\C\0\A\9 ]] 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1bb7b072-7f53-436c-9091-82c93b69b91e 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1bb7b0727f53436c909182c93b69b91e 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1BB7B0727F53436C909182C93B69B91E 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1BB7B0727F53436C909182C93B69B91E == \1\B\B\7\B\0\7\2\7\F\5\3\4\3\6\C\9\0\9\1\8\2\C\9\3\B\6\9\B\9\1\E ]] 00:21:49.924 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2720842 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2720842 ']' 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2720842 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720842 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720842' 00:21:50.183 killing process with pid 2720842 00:21:50.183 10:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2720842 00:21:50.183 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2720842 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.442 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.442 rmmod nvme_tcp 00:21:50.442 rmmod nvme_fabrics 00:21:50.442 rmmod nvme_keyring 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2720814 ']' 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2720814 ']' 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.701 10:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720814' 00:21:50.701 killing process with pid 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2720814 00:21:50.701 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.702 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.702 10:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.237 00:21:53.237 real 0m12.415s 00:21:53.237 user 0m9.707s 00:21:53.237 sys 0m5.442s 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:53.237 ************************************ 00:21:53.237 END TEST nvmf_nsid 00:21:53.237 ************************************ 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:53.237 00:21:53.237 real 12m4.793s 00:21:53.237 user 26m4.300s 00:21:53.237 sys 3m43.490s 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.237 10:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:53.237 ************************************ 00:21:53.237 END TEST nvmf_target_extra 00:21:53.237 ************************************ 00:21:53.237 10:00:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:53.237 10:00:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.237 10:00:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.237 10:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.237 ************************************ 00:21:53.237 START TEST nvmf_host 00:21:53.237 ************************************ 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:53.237 * Looking for test storage... 
00:21:53.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:53.237 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.238 --rc genhtml_branch_coverage=1 00:21:53.238 --rc genhtml_function_coverage=1 00:21:53.238 --rc genhtml_legend=1 00:21:53.238 --rc geninfo_all_blocks=1 00:21:53.238 --rc geninfo_unexecuted_blocks=1 00:21:53.238 00:21:53.238 ' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.238 --rc genhtml_branch_coverage=1 00:21:53.238 --rc genhtml_function_coverage=1 00:21:53.238 --rc genhtml_legend=1 00:21:53.238 --rc 
geninfo_all_blocks=1 00:21:53.238 --rc geninfo_unexecuted_blocks=1 00:21:53.238 00:21:53.238 ' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.238 --rc genhtml_branch_coverage=1 00:21:53.238 --rc genhtml_function_coverage=1 00:21:53.238 --rc genhtml_legend=1 00:21:53.238 --rc geninfo_all_blocks=1 00:21:53.238 --rc geninfo_unexecuted_blocks=1 00:21:53.238 00:21:53.238 ' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.238 --rc genhtml_branch_coverage=1 00:21:53.238 --rc genhtml_function_coverage=1 00:21:53.238 --rc genhtml_legend=1 00:21:53.238 --rc geninfo_all_blocks=1 00:21:53.238 --rc geninfo_unexecuted_blocks=1 00:21:53.238 00:21:53.238 ' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.238 ************************************ 00:21:53.238 START TEST nvmf_multicontroller 00:21:53.238 ************************************ 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:53.238 * Looking for test storage... 
00:21:53.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.238 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.499 --rc genhtml_branch_coverage=1 00:21:53.499 --rc genhtml_function_coverage=1 
00:21:53.499 --rc genhtml_legend=1 00:21:53.499 --rc geninfo_all_blocks=1 00:21:53.499 --rc geninfo_unexecuted_blocks=1 00:21:53.499 00:21:53.499 ' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.499 --rc genhtml_branch_coverage=1 00:21:53.499 --rc genhtml_function_coverage=1 00:21:53.499 --rc genhtml_legend=1 00:21:53.499 --rc geninfo_all_blocks=1 00:21:53.499 --rc geninfo_unexecuted_blocks=1 00:21:53.499 00:21:53.499 ' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.499 --rc genhtml_branch_coverage=1 00:21:53.499 --rc genhtml_function_coverage=1 00:21:53.499 --rc genhtml_legend=1 00:21:53.499 --rc geninfo_all_blocks=1 00:21:53.499 --rc geninfo_unexecuted_blocks=1 00:21:53.499 00:21:53.499 ' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.499 --rc genhtml_branch_coverage=1 00:21:53.499 --rc genhtml_function_coverage=1 00:21:53.499 --rc genhtml_legend=1 00:21:53.499 --rc geninfo_all_blocks=1 00:21:53.499 --rc geninfo_unexecuted_blocks=1 00:21:53.499 00:21:53.499 ' 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.499 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.500 10:00:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.500 10:00:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.070 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.071 10:00:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.071 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.071 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:22:00.071 00:22:00.071 --- 10.0.0.2 ping statistics --- 00:22:00.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.071 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:22:00.071 00:22:00.071 --- 10.0.0.1 ping statistics --- 00:22:00.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.071 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2725143 00:22:00.071 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2725143 00:22:00.072 10:00:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2725143 ']' 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.072 10:00:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.072 [2024-11-20 10:00:32.920143] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:00.072 [2024-11-20 10:00:32.920197] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.072 [2024-11-20 10:00:33.000138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:00.072 [2024-11-20 10:00:33.043315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.072 [2024-11-20 10:00:33.043350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:00.072 [2024-11-20 10:00:33.043357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.072 [2024-11-20 10:00:33.043363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.072 [2024-11-20 10:00:33.043368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.072 [2024-11-20 10:00:33.044801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.072 [2024-11-20 10:00:33.044909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.072 [2024-11-20 10:00:33.044910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.331 [2024-11-20 10:00:33.802619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.331 Malloc0 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.331 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 [2024-11-20 
10:00:33.867469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 [2024-11-20 10:00:33.875369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 Malloc1 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.332 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2725240 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2725240 /var/tmp/bdevperf.sock 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2725240 ']' 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.591 10:00:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.851 NVMe0n1 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.851 1 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:00.851 10:00:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.851 request: 00:22:00.851 { 00:22:00.851 "name": "NVMe0", 00:22:00.851 "trtype": "tcp", 00:22:00.851 "traddr": "10.0.0.2", 00:22:00.851 "adrfam": "ipv4", 00:22:00.851 "trsvcid": "4420", 00:22:00.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.851 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:00.851 "hostaddr": "10.0.0.1", 00:22:00.851 "prchk_reftag": false, 00:22:00.851 "prchk_guard": false, 00:22:00.851 "hdgst": false, 00:22:00.851 "ddgst": false, 00:22:00.851 "allow_unrecognized_csi": false, 00:22:00.851 "method": "bdev_nvme_attach_controller", 00:22:00.851 "req_id": 1 00:22:00.851 } 00:22:00.851 Got JSON-RPC error response 00:22:00.851 response: 00:22:00.851 { 00:22:00.851 "code": -114, 00:22:00.851 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:00.851 } 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:00.851 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:01.110 10:00:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 request: 00:22:01.110 { 00:22:01.110 "name": "NVMe0", 00:22:01.110 "trtype": "tcp", 00:22:01.110 "traddr": "10.0.0.2", 00:22:01.110 "adrfam": "ipv4", 00:22:01.110 "trsvcid": "4420", 00:22:01.110 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.110 "hostaddr": "10.0.0.1", 00:22:01.110 "prchk_reftag": false, 00:22:01.110 "prchk_guard": false, 00:22:01.110 "hdgst": false, 00:22:01.110 "ddgst": false, 00:22:01.110 "allow_unrecognized_csi": false, 00:22:01.110 "method": "bdev_nvme_attach_controller", 00:22:01.110 "req_id": 1 00:22:01.110 } 00:22:01.110 Got JSON-RPC error response 00:22:01.110 response: 00:22:01.110 { 00:22:01.110 "code": -114, 00:22:01.110 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:01.110 } 00:22:01.110 10:00:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.110 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.110 request: 00:22:01.110 { 00:22:01.110 "name": "NVMe0", 00:22:01.110 "trtype": "tcp", 00:22:01.110 "traddr": "10.0.0.2", 00:22:01.110 "adrfam": "ipv4", 00:22:01.110 "trsvcid": "4420", 00:22:01.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.110 "hostaddr": "10.0.0.1", 00:22:01.110 "prchk_reftag": false, 00:22:01.110 "prchk_guard": false, 00:22:01.110 "hdgst": false, 00:22:01.110 "ddgst": false, 00:22:01.110 "multipath": "disable", 00:22:01.110 "allow_unrecognized_csi": false, 00:22:01.111 "method": "bdev_nvme_attach_controller", 00:22:01.111 "req_id": 1 00:22:01.111 } 00:22:01.111 Got JSON-RPC error response 00:22:01.111 response: 00:22:01.111 { 00:22:01.111 "code": -114, 00:22:01.111 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:01.111 } 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.111 request: 00:22:01.111 { 00:22:01.111 "name": "NVMe0", 00:22:01.111 "trtype": "tcp", 00:22:01.111 "traddr": "10.0.0.2", 00:22:01.111 "adrfam": "ipv4", 00:22:01.111 "trsvcid": "4420", 00:22:01.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.111 "hostaddr": "10.0.0.1", 00:22:01.111 "prchk_reftag": false, 00:22:01.111 "prchk_guard": false, 00:22:01.111 "hdgst": false, 00:22:01.111 "ddgst": false, 00:22:01.111 "multipath": "failover", 00:22:01.111 "allow_unrecognized_csi": false, 00:22:01.111 "method": "bdev_nvme_attach_controller", 00:22:01.111 "req_id": 1 00:22:01.111 } 00:22:01.111 Got JSON-RPC error response 00:22:01.111 response: 00:22:01.111 { 00:22:01.111 "code": -114, 00:22:01.111 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:01.111 } 00:22:01.111 10:00:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.111 NVMe0n1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.111 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.370 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:01.370 10:00:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.304 { 00:22:02.304 "results": [ 00:22:02.304 { 00:22:02.304 "job": "NVMe0n1", 00:22:02.304 "core_mask": "0x1", 00:22:02.304 "workload": "write", 00:22:02.304 "status": "finished", 00:22:02.304 "queue_depth": 128, 00:22:02.304 "io_size": 4096, 00:22:02.304 "runtime": 1.004565, 00:22:02.304 "iops": 25224.848566294862, 00:22:02.304 "mibps": 98.5345647120893, 00:22:02.304 "io_failed": 0, 00:22:02.304 "io_timeout": 0, 00:22:02.304 "avg_latency_us": 5068.09947577705, 00:22:02.304 "min_latency_us": 3276.8, 00:22:02.304 "max_latency_us": 10485.76 00:22:02.304 } 00:22:02.304 ], 00:22:02.304 "core_count": 1 00:22:02.304 } 00:22:02.304 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:22:02.304 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.304 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2725240 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2725240 ']' 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2725240 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725240 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725240' 00:22:02.563 killing process with pid 2725240 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2725240 00:22:02.563 10:00:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2725240 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:02.563 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:02.823 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.823 [2024-11-20 10:00:33.982303] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:22:02.823 [2024-11-20 10:00:33.982354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725240 ] 00:22:02.823 [2024-11-20 10:00:34.059516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.823 [2024-11-20 10:00:34.101955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.823 [2024-11-20 10:00:34.732314] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 51389c60-e663-4f16-b5bb-a87ade68228e already exists 00:22:02.823 [2024-11-20 10:00:34.732344] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:51389c60-e663-4f16-b5bb-a87ade68228e alias for bdev NVMe1n1 00:22:02.823 [2024-11-20 10:00:34.732352] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:02.823 Running I/O for 1 seconds... 00:22:02.823 25212.00 IOPS, 98.48 MiB/s 00:22:02.823 Latency(us) 00:22:02.823 [2024-11-20T09:00:36.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.823 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:02.823 NVMe0n1 : 1.00 25224.85 98.53 0.00 0.00 5068.10 3276.80 10485.76 00:22:02.823 [2024-11-20T09:00:36.405Z] =================================================================================================================== 00:22:02.823 [2024-11-20T09:00:36.405Z] Total : 25224.85 98.53 0.00 0.00 5068.10 3276.80 10485.76 00:22:02.823 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.823 00:22:02.823 Latency(us) 00:22:02.823 [2024-11-20T09:00:36.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.823 [2024-11-20T09:00:36.405Z] =================================================================================================================== 00:22:02.823 [2024-11-20T09:00:36.405Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:02.823 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.823 rmmod nvme_tcp 00:22:02.823 rmmod nvme_fabrics 00:22:02.823 rmmod nvme_keyring 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2725143 ']' 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2725143 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2725143 ']' 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2725143 
00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725143 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725143' 00:22:02.823 killing process with pid 2725143 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2725143 00:22:02.823 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2725143 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.083 10:00:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.987 10:00:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.987 00:22:04.987 real 0m11.874s 00:22:04.987 user 0m14.435s 00:22:04.987 sys 0m5.256s 00:22:04.987 10:00:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.987 10:00:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:04.987 ************************************ 00:22:04.987 END TEST nvmf_multicontroller 00:22:04.987 ************************************ 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.246 ************************************ 00:22:05.246 START TEST nvmf_aer 00:22:05.246 ************************************ 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.246 * Looking for test storage... 
00:22:05.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:05.246 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.247 --rc genhtml_branch_coverage=1 00:22:05.247 --rc genhtml_function_coverage=1 00:22:05.247 --rc genhtml_legend=1 00:22:05.247 --rc geninfo_all_blocks=1 00:22:05.247 --rc geninfo_unexecuted_blocks=1 00:22:05.247 00:22:05.247 ' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.247 --rc 
genhtml_branch_coverage=1 00:22:05.247 --rc genhtml_function_coverage=1 00:22:05.247 --rc genhtml_legend=1 00:22:05.247 --rc geninfo_all_blocks=1 00:22:05.247 --rc geninfo_unexecuted_blocks=1 00:22:05.247 00:22:05.247 ' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.247 --rc genhtml_branch_coverage=1 00:22:05.247 --rc genhtml_function_coverage=1 00:22:05.247 --rc genhtml_legend=1 00:22:05.247 --rc geninfo_all_blocks=1 00:22:05.247 --rc geninfo_unexecuted_blocks=1 00:22:05.247 00:22:05.247 ' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.247 --rc genhtml_branch_coverage=1 00:22:05.247 --rc genhtml_function_coverage=1 00:22:05.247 --rc genhtml_legend=1 00:22:05.247 --rc geninfo_all_blocks=1 00:22:05.247 --rc geninfo_unexecuted_blocks=1 00:22:05.247 00:22:05.247 ' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.247 10:00:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.247 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.248 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.248 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.248 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.248 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.248 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.506 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.506 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.506 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.506 10:00:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.075 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.075 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.075 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.075 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.075 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.076 10:00:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.076 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.076 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:12.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:22:12.076 00:22:12.076 --- 10.0.0.2 ping statistics --- 00:22:12.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.076 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:22:12.076 00:22:12.076 --- 10.0.0.1 ping statistics --- 00:22:12.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.076 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2729169 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.076 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2729169 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2729169 ']' 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.077 10:00:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 [2024-11-20 10:00:44.815882] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:12.077 [2024-11-20 10:00:44.815926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.077 [2024-11-20 10:00:44.894855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.077 [2024-11-20 10:00:44.936891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:12.077 [2024-11-20 10:00:44.936928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.077 [2024-11-20 10:00:44.936935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.077 [2024-11-20 10:00:44.936941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.077 [2024-11-20 10:00:44.936946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.077 [2024-11-20 10:00:44.938529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.077 [2024-11-20 10:00:44.938641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.077 [2024-11-20 10:00:44.938778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.077 [2024-11-20 10:00:44.938779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 [2024-11-20 10:00:45.074581] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 Malloc0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 [2024-11-20 10:00:45.130970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 [ 00:22:12.077 { 00:22:12.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.077 "subtype": "Discovery", 00:22:12.077 "listen_addresses": [], 00:22:12.077 "allow_any_host": true, 00:22:12.077 "hosts": [] 00:22:12.077 }, 00:22:12.077 { 00:22:12.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.077 "subtype": "NVMe", 00:22:12.077 "listen_addresses": [ 00:22:12.077 { 00:22:12.077 "trtype": "TCP", 00:22:12.077 "adrfam": "IPv4", 00:22:12.077 "traddr": "10.0.0.2", 00:22:12.077 "trsvcid": "4420" 00:22:12.077 } 00:22:12.077 ], 00:22:12.077 "allow_any_host": true, 00:22:12.077 "hosts": [], 00:22:12.077 "serial_number": "SPDK00000000000001", 00:22:12.077 "model_number": "SPDK bdev Controller", 00:22:12.077 "max_namespaces": 2, 00:22:12.077 "min_cntlid": 1, 00:22:12.077 "max_cntlid": 65519, 00:22:12.077 "namespaces": [ 00:22:12.077 { 00:22:12.077 "nsid": 1, 00:22:12.077 "bdev_name": "Malloc0", 00:22:12.077 "name": "Malloc0", 00:22:12.077 "nguid": "1DA74B0E51A0432CAF1BC863DE542A61", 00:22:12.077 "uuid": "1da74b0e-51a0-432c-af1b-c863de542a61" 00:22:12.077 } 00:22:12.077 ] 00:22:12.077 } 00:22:12.077 ] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2729202 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 Malloc1 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.077 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.077 Asynchronous Event Request test 00:22:12.077 Attaching to 10.0.0.2 00:22:12.077 Attached to 10.0.0.2 00:22:12.077 Registering asynchronous event callbacks... 00:22:12.077 Starting namespace attribute notice tests for all controllers... 00:22:12.077 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:12.077 aer_cb - Changed Namespace 00:22:12.077 Cleaning up... 
00:22:12.077 [ 00:22:12.077 { 00:22:12.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.077 "subtype": "Discovery", 00:22:12.077 "listen_addresses": [], 00:22:12.077 "allow_any_host": true, 00:22:12.077 "hosts": [] 00:22:12.077 }, 00:22:12.077 { 00:22:12.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.077 "subtype": "NVMe", 00:22:12.077 "listen_addresses": [ 00:22:12.077 { 00:22:12.077 "trtype": "TCP", 00:22:12.077 "adrfam": "IPv4", 00:22:12.077 "traddr": "10.0.0.2", 00:22:12.077 "trsvcid": "4420" 00:22:12.077 } 00:22:12.077 ], 00:22:12.077 "allow_any_host": true, 00:22:12.077 "hosts": [], 00:22:12.077 "serial_number": "SPDK00000000000001", 00:22:12.077 "model_number": "SPDK bdev Controller", 00:22:12.077 "max_namespaces": 2, 00:22:12.077 "min_cntlid": 1, 00:22:12.077 "max_cntlid": 65519, 00:22:12.077 "namespaces": [ 00:22:12.077 { 00:22:12.077 "nsid": 1, 00:22:12.077 "bdev_name": "Malloc0", 00:22:12.077 "name": "Malloc0", 00:22:12.077 "nguid": "1DA74B0E51A0432CAF1BC863DE542A61", 00:22:12.078 "uuid": "1da74b0e-51a0-432c-af1b-c863de542a61" 00:22:12.078 }, 00:22:12.078 { 00:22:12.078 "nsid": 2, 00:22:12.078 "bdev_name": "Malloc1", 00:22:12.078 "name": "Malloc1", 00:22:12.078 "nguid": "FD2225B7220C4A2CAEE73B779D55D797", 00:22:12.078 "uuid": "fd2225b7-220c-4a2c-aee7-3b779d55d797" 00:22:12.078 } 00:22:12.078 ] 00:22:12.078 } 00:22:12.078 ] 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2729202 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.078 10:00:45 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.078 rmmod nvme_tcp 00:22:12.078 rmmod nvme_fabrics 00:22:12.078 rmmod nvme_keyring 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2729169 ']' 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2729169 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2729169 ']' 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2729169 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729169 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729169' 00:22:12.078 killing process with pid 2729169 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2729169 00:22:12.078 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2729169 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.337 10:00:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.872 00:22:14.872 real 0m9.220s 00:22:14.872 user 0m5.030s 00:22:14.872 sys 0m4.876s 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.872 ************************************ 00:22:14.872 END TEST nvmf_aer 00:22:14.872 ************************************ 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.872 ************************************ 00:22:14.872 START TEST nvmf_async_init 00:22:14.872 ************************************ 00:22:14.872 10:00:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:14.872 * Looking for test storage... 
00:22:14.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.872 10:00:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.872 --rc genhtml_branch_coverage=1 00:22:14.872 --rc genhtml_function_coverage=1 00:22:14.872 --rc genhtml_legend=1 00:22:14.872 --rc geninfo_all_blocks=1 00:22:14.872 --rc geninfo_unexecuted_blocks=1 00:22:14.872 
00:22:14.872 ' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.872 --rc genhtml_branch_coverage=1 00:22:14.872 --rc genhtml_function_coverage=1 00:22:14.872 --rc genhtml_legend=1 00:22:14.872 --rc geninfo_all_blocks=1 00:22:14.872 --rc geninfo_unexecuted_blocks=1 00:22:14.872 00:22:14.872 ' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.872 --rc genhtml_branch_coverage=1 00:22:14.872 --rc genhtml_function_coverage=1 00:22:14.872 --rc genhtml_legend=1 00:22:14.872 --rc geninfo_all_blocks=1 00:22:14.872 --rc geninfo_unexecuted_blocks=1 00:22:14.872 00:22:14.872 ' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.872 --rc genhtml_branch_coverage=1 00:22:14.872 --rc genhtml_function_coverage=1 00:22:14.872 --rc genhtml_legend=1 00:22:14.872 --rc geninfo_all_blocks=1 00:22:14.872 --rc geninfo_unexecuted_blocks=1 00:22:14.872 00:22:14.872 ' 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.872 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ce9e3f4dc1854733b7f3988fd6b6ba8d 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.873 10:00:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.447 10:00:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.447 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.447 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.447 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.448 10:00:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:21.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:22:21.448 00:22:21.448 --- 10.0.0.2 ping statistics --- 00:22:21.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.448 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:21.448 00:22:21.448 --- 10.0.0.1 ping statistics --- 00:22:21.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.448 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2732769 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2732769 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2732769 ']' 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 [2024-11-20 10:00:54.135153] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:22:21.448 [2024-11-20 10:00:54.135213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.448 [2024-11-20 10:00:54.216821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.448 [2024-11-20 10:00:54.260801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.448 [2024-11-20 10:00:54.260832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.448 [2024-11-20 10:00:54.260840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.448 [2024-11-20 10:00:54.260848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.448 [2024-11-20 10:00:54.260855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.448 [2024-11-20 10:00:54.261332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 [2024-11-20 10:00:54.409437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 null0 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ce9e3f4dc1854733b7f3988fd6b6ba8d 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 [2024-11-20 10:00:54.453703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 nvme0n1 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.448 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.448 [ 00:22:21.448 { 00:22:21.448 "name": "nvme0n1", 00:22:21.448 "aliases": [ 00:22:21.448 "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d" 00:22:21.448 ], 00:22:21.448 "product_name": "NVMe disk", 00:22:21.448 "block_size": 512, 00:22:21.448 "num_blocks": 2097152, 00:22:21.448 "uuid": "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d", 00:22:21.448 "numa_id": 1, 00:22:21.448 "assigned_rate_limits": { 00:22:21.448 "rw_ios_per_sec": 0, 00:22:21.448 "rw_mbytes_per_sec": 0, 00:22:21.448 "r_mbytes_per_sec": 0, 00:22:21.448 "w_mbytes_per_sec": 0 00:22:21.448 }, 00:22:21.448 "claimed": false, 00:22:21.448 "zoned": false, 00:22:21.448 "supported_io_types": { 00:22:21.448 "read": true, 00:22:21.448 "write": true, 00:22:21.448 "unmap": false, 00:22:21.448 "flush": true, 00:22:21.448 "reset": true, 00:22:21.448 "nvme_admin": true, 00:22:21.448 "nvme_io": true, 00:22:21.448 "nvme_io_md": false, 00:22:21.448 "write_zeroes": true, 00:22:21.448 "zcopy": false, 00:22:21.448 "get_zone_info": false, 00:22:21.448 "zone_management": false, 00:22:21.448 "zone_append": false, 00:22:21.448 "compare": true, 00:22:21.448 "compare_and_write": true, 00:22:21.449 "abort": true, 00:22:21.449 "seek_hole": false, 00:22:21.449 "seek_data": false, 00:22:21.449 "copy": true, 00:22:21.449 
"nvme_iov_md": false 00:22:21.449 }, 00:22:21.449 "memory_domains": [ 00:22:21.449 { 00:22:21.449 "dma_device_id": "system", 00:22:21.449 "dma_device_type": 1 00:22:21.449 } 00:22:21.449 ], 00:22:21.449 "driver_specific": { 00:22:21.449 "nvme": [ 00:22:21.449 { 00:22:21.449 "trid": { 00:22:21.449 "trtype": "TCP", 00:22:21.449 "adrfam": "IPv4", 00:22:21.449 "traddr": "10.0.0.2", 00:22:21.449 "trsvcid": "4420", 00:22:21.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:21.449 }, 00:22:21.449 "ctrlr_data": { 00:22:21.449 "cntlid": 1, 00:22:21.449 "vendor_id": "0x8086", 00:22:21.449 "model_number": "SPDK bdev Controller", 00:22:21.449 "serial_number": "00000000000000000000", 00:22:21.449 "firmware_revision": "25.01", 00:22:21.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.449 "oacs": { 00:22:21.449 "security": 0, 00:22:21.449 "format": 0, 00:22:21.449 "firmware": 0, 00:22:21.449 "ns_manage": 0 00:22:21.449 }, 00:22:21.449 "multi_ctrlr": true, 00:22:21.449 "ana_reporting": false 00:22:21.449 }, 00:22:21.449 "vs": { 00:22:21.449 "nvme_version": "1.3" 00:22:21.449 }, 00:22:21.449 "ns_data": { 00:22:21.449 "id": 1, 00:22:21.449 "can_share": true 00:22:21.449 } 00:22:21.449 } 00:22:21.449 ], 00:22:21.449 "mp_policy": "active_passive" 00:22:21.449 } 00:22:21.449 } 00:22:21.449 ] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 [2024-11-20 10:00:54.715374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:21.449 [2024-11-20 10:00:54.715449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xcdc220 (9): Bad file descriptor 00:22:21.449 [2024-11-20 10:00:54.847284] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 [ 00:22:21.449 { 00:22:21.449 "name": "nvme0n1", 00:22:21.449 "aliases": [ 00:22:21.449 "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d" 00:22:21.449 ], 00:22:21.449 "product_name": "NVMe disk", 00:22:21.449 "block_size": 512, 00:22:21.449 "num_blocks": 2097152, 00:22:21.449 "uuid": "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d", 00:22:21.449 "numa_id": 1, 00:22:21.449 "assigned_rate_limits": { 00:22:21.449 "rw_ios_per_sec": 0, 00:22:21.449 "rw_mbytes_per_sec": 0, 00:22:21.449 "r_mbytes_per_sec": 0, 00:22:21.449 "w_mbytes_per_sec": 0 00:22:21.449 }, 00:22:21.449 "claimed": false, 00:22:21.449 "zoned": false, 00:22:21.449 "supported_io_types": { 00:22:21.449 "read": true, 00:22:21.449 "write": true, 00:22:21.449 "unmap": false, 00:22:21.449 "flush": true, 00:22:21.449 "reset": true, 00:22:21.449 "nvme_admin": true, 00:22:21.449 "nvme_io": true, 00:22:21.449 "nvme_io_md": false, 00:22:21.449 "write_zeroes": true, 00:22:21.449 "zcopy": false, 00:22:21.449 "get_zone_info": false, 00:22:21.449 "zone_management": false, 00:22:21.449 "zone_append": false, 00:22:21.449 "compare": true, 00:22:21.449 "compare_and_write": true, 00:22:21.449 "abort": true, 00:22:21.449 "seek_hole": false, 00:22:21.449 "seek_data": false, 00:22:21.449 "copy": true, 00:22:21.449 "nvme_iov_md": false 00:22:21.449 }, 00:22:21.449 "memory_domains": [ 
00:22:21.449 { 00:22:21.449 "dma_device_id": "system", 00:22:21.449 "dma_device_type": 1 00:22:21.449 } 00:22:21.449 ], 00:22:21.449 "driver_specific": { 00:22:21.449 "nvme": [ 00:22:21.449 { 00:22:21.449 "trid": { 00:22:21.449 "trtype": "TCP", 00:22:21.449 "adrfam": "IPv4", 00:22:21.449 "traddr": "10.0.0.2", 00:22:21.449 "trsvcid": "4420", 00:22:21.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:21.449 }, 00:22:21.449 "ctrlr_data": { 00:22:21.449 "cntlid": 2, 00:22:21.449 "vendor_id": "0x8086", 00:22:21.449 "model_number": "SPDK bdev Controller", 00:22:21.449 "serial_number": "00000000000000000000", 00:22:21.449 "firmware_revision": "25.01", 00:22:21.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.449 "oacs": { 00:22:21.449 "security": 0, 00:22:21.449 "format": 0, 00:22:21.449 "firmware": 0, 00:22:21.449 "ns_manage": 0 00:22:21.449 }, 00:22:21.449 "multi_ctrlr": true, 00:22:21.449 "ana_reporting": false 00:22:21.449 }, 00:22:21.449 "vs": { 00:22:21.449 "nvme_version": "1.3" 00:22:21.449 }, 00:22:21.449 "ns_data": { 00:22:21.449 "id": 1, 00:22:21.449 "can_share": true 00:22:21.449 } 00:22:21.449 } 00:22:21.449 ], 00:22:21.449 "mp_policy": "active_passive" 00:22:21.449 } 00:22:21.449 } 00:22:21.449 ] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Z0pm4OstZY 
00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Z0pm4OstZY 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Z0pm4OstZY 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 [2024-11-20 10:00:54.920006] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.449 [2024-11-20 10:00:54.920110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 [2024-11-20 10:00:54.936057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.449 nvme0n1 00:22:21.449 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.449 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:21.449 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.449 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.449 [ 00:22:21.449 { 00:22:21.449 "name": "nvme0n1", 00:22:21.449 "aliases": [ 00:22:21.449 "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d" 00:22:21.449 ], 00:22:21.449 "product_name": "NVMe disk", 00:22:21.449 "block_size": 512, 00:22:21.449 "num_blocks": 2097152, 00:22:21.449 "uuid": "ce9e3f4d-c185-4733-b7f3-988fd6b6ba8d", 00:22:21.449 "numa_id": 1, 00:22:21.449 "assigned_rate_limits": { 00:22:21.449 "rw_ios_per_sec": 0, 00:22:21.449 
"rw_mbytes_per_sec": 0, 00:22:21.449 "r_mbytes_per_sec": 0, 00:22:21.449 "w_mbytes_per_sec": 0 00:22:21.449 }, 00:22:21.449 "claimed": false, 00:22:21.449 "zoned": false, 00:22:21.449 "supported_io_types": { 00:22:21.449 "read": true, 00:22:21.449 "write": true, 00:22:21.449 "unmap": false, 00:22:21.450 "flush": true, 00:22:21.450 "reset": true, 00:22:21.450 "nvme_admin": true, 00:22:21.450 "nvme_io": true, 00:22:21.450 "nvme_io_md": false, 00:22:21.450 "write_zeroes": true, 00:22:21.450 "zcopy": false, 00:22:21.450 "get_zone_info": false, 00:22:21.450 "zone_management": false, 00:22:21.450 "zone_append": false, 00:22:21.450 "compare": true, 00:22:21.450 "compare_and_write": true, 00:22:21.450 "abort": true, 00:22:21.450 "seek_hole": false, 00:22:21.450 "seek_data": false, 00:22:21.450 "copy": true, 00:22:21.450 "nvme_iov_md": false 00:22:21.450 }, 00:22:21.450 "memory_domains": [ 00:22:21.450 { 00:22:21.450 "dma_device_id": "system", 00:22:21.450 "dma_device_type": 1 00:22:21.450 } 00:22:21.450 ], 00:22:21.450 "driver_specific": { 00:22:21.450 "nvme": [ 00:22:21.450 { 00:22:21.450 "trid": { 00:22:21.450 "trtype": "TCP", 00:22:21.450 "adrfam": "IPv4", 00:22:21.450 "traddr": "10.0.0.2", 00:22:21.450 "trsvcid": "4421", 00:22:21.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:21.450 }, 00:22:21.450 "ctrlr_data": { 00:22:21.450 "cntlid": 3, 00:22:21.450 "vendor_id": "0x8086", 00:22:21.450 "model_number": "SPDK bdev Controller", 00:22:21.450 "serial_number": "00000000000000000000", 00:22:21.450 "firmware_revision": "25.01", 00:22:21.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.450 "oacs": { 00:22:21.450 "security": 0, 00:22:21.450 "format": 0, 00:22:21.450 "firmware": 0, 00:22:21.450 "ns_manage": 0 00:22:21.450 }, 00:22:21.450 "multi_ctrlr": true, 00:22:21.450 "ana_reporting": false 00:22:21.450 }, 00:22:21.450 "vs": { 00:22:21.450 "nvme_version": "1.3" 00:22:21.450 }, 00:22:21.450 "ns_data": { 00:22:21.450 "id": 1, 00:22:21.450 "can_share": true 00:22:21.450 } 
00:22:21.450 } 00:22:21.450 ], 00:22:21.450 "mp_policy": "active_passive" 00:22:21.450 } 00:22:21.450 } 00:22:21.450 ] 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Z0pm4OstZY 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.708 rmmod nvme_tcp 00:22:21.708 rmmod nvme_fabrics 00:22:21.708 rmmod nvme_keyring 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:21.708 10:00:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2732769 ']' 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2732769 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2732769 ']' 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2732769 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2732769 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2732769' 00:22:21.708 killing process with pid 2732769 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2732769 00:22:21.708 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2732769 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.966 
10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.966 10:00:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.934 00:22:23.934 real 0m9.464s 00:22:23.934 user 0m3.065s 00:22:23.934 sys 0m4.841s 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.934 ************************************ 00:22:23.934 END TEST nvmf_async_init 00:22:23.934 ************************************ 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.934 ************************************ 00:22:23.934 START TEST dma 00:22:23.934 ************************************ 00:22:23.934 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:24.208 * Looking for test storage... 00:22:24.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.208 --rc genhtml_branch_coverage=1 00:22:24.208 --rc genhtml_function_coverage=1 00:22:24.208 --rc genhtml_legend=1 00:22:24.208 --rc geninfo_all_blocks=1 00:22:24.208 --rc geninfo_unexecuted_blocks=1 00:22:24.208 00:22:24.208 ' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.208 --rc genhtml_branch_coverage=1 00:22:24.208 --rc genhtml_function_coverage=1 
00:22:24.208 --rc genhtml_legend=1 00:22:24.208 --rc geninfo_all_blocks=1 00:22:24.208 --rc geninfo_unexecuted_blocks=1 00:22:24.208 00:22:24.208 ' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.208 --rc genhtml_branch_coverage=1 00:22:24.208 --rc genhtml_function_coverage=1 00:22:24.208 --rc genhtml_legend=1 00:22:24.208 --rc geninfo_all_blocks=1 00:22:24.208 --rc geninfo_unexecuted_blocks=1 00:22:24.208 00:22:24.208 ' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.208 --rc genhtml_branch_coverage=1 00:22:24.208 --rc genhtml_function_coverage=1 00:22:24.208 --rc genhtml_legend=1 00:22:24.208 --rc geninfo_all_blocks=1 00:22:24.208 --rc geninfo_unexecuted_blocks=1 00:22:24.208 00:22:24.208 ' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:24.208 
10:00:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:24.208 10:00:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:24.208 00:22:24.208 real 0m0.206s 00:22:24.208 user 0m0.123s 00:22:24.208 sys 0m0.097s 00:22:24.209 10:00:57 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:24.209 ************************************ 00:22:24.209 END TEST dma 00:22:24.209 ************************************ 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.209 ************************************ 00:22:24.209 START TEST nvmf_identify 00:22:24.209 ************************************ 00:22:24.209 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:24.468 * Looking for test storage... 
00:22:24.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.468 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.468 --rc genhtml_branch_coverage=1 00:22:24.468 --rc genhtml_function_coverage=1 00:22:24.469 --rc genhtml_legend=1 00:22:24.469 --rc geninfo_all_blocks=1 00:22:24.469 --rc geninfo_unexecuted_blocks=1 00:22:24.469 00:22:24.469 ' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.469 --rc genhtml_branch_coverage=1 00:22:24.469 --rc genhtml_function_coverage=1 00:22:24.469 --rc genhtml_legend=1 00:22:24.469 --rc geninfo_all_blocks=1 00:22:24.469 --rc geninfo_unexecuted_blocks=1 00:22:24.469 00:22:24.469 ' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.469 --rc genhtml_branch_coverage=1 00:22:24.469 --rc genhtml_function_coverage=1 00:22:24.469 --rc genhtml_legend=1 00:22:24.469 --rc geninfo_all_blocks=1 00:22:24.469 --rc geninfo_unexecuted_blocks=1 00:22:24.469 00:22:24.469 ' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.469 --rc genhtml_branch_coverage=1 00:22:24.469 --rc genhtml_function_coverage=1 00:22:24.469 --rc genhtml_legend=1 00:22:24.469 --rc geninfo_all_blocks=1 00:22:24.469 --rc geninfo_unexecuted_blocks=1 00:22:24.469 00:22:24.469 ' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.469 10:00:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.055 10:01:03 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.055 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.055 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.055 
10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.056 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.056 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.056 10:01:03 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.056 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
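The device-discovery loop above resolves each PCI address to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/` and then stripping the path prefix with `"${pci_net_devs[@]##*/}"`. That idiom can be exercised on its own; in this sketch the sysfs layout is simulated with a temp directory, and the PCI address and interface name are taken from the log purely for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs name extraction used above. The sysfs tree is
# faked under a temp directory so this runs unprivileged; the PCI address and
# interface name are illustrative.
set -euo pipefail

sysfs=$(mktemp -d)
pci="0000:86:00.0"                      # example PCI address from the log
mkdir -p "$sysfs/$pci/net/cvl_0_0"      # one net interface under the device

# Same glob + "##*/" prefix strip the test performs:
pci_net_devs=("$sysfs/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

The `##*/` expansion deletes the longest leading match of `*/`, leaving only the final path component, which is exactly how the full sysfs paths become bare interface names like `cvl_0_0`.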
00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:22:31.056 00:22:31.056 --- 10.0.0.2 ping statistics --- 00:22:31.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.056 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:22:31.056 00:22:31.056 --- 10.0.0.1 ping statistics --- 00:22:31.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.056 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2736555 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2736555 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2736555 ']' 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
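The `nvmf_tcp_init` sequence above splits the two ports of the NIC: `cvl_0_0` is moved into a fresh network namespace as the target side (10.0.0.2) while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening port 4420. Those are privileged operations, so this sketch only collects and prints the essential commands as a dry run rather than executing them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace split performed by nvmf_tcp_init above.
# Actually running these requires root and the real interfaces; here the
# commands are only assembled and printed for inspection.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0  INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

netns_cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"
  "ip addr add $INI_IP/24 dev $INI_IF"
  "ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${netns_cmds[@]}"
```

After this split, anything the target process does must be wrapped in `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` array above), which is why the cross-namespace pings in both directions are the success criterion.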
00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.056 10:01:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.056 [2024-11-20 10:01:03.927567] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:31.056 [2024-11-20 10:01:03.927614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.056 [2024-11-20 10:01:04.005900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.056 [2024-11-20 10:01:04.049458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.056 [2024-11-20 10:01:04.049497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.056 [2024-11-20 10:01:04.049505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.056 [2024-11-20 10:01:04.049511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.056 [2024-11-20 10:01:04.049516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
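With the reactors running, the `rpc_cmd` calls that follow provision the target over JSON-RPC: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and add listeners. Outside the harness the same sequence maps onto SPDK's `rpc.py` client roughly as below; the `scripts/rpc.py` path is an assumption, and the sketch only prints the commands instead of issuing them:

```shell
#!/usr/bin/env bash
# Illustrative stand-alone version of the rpc_cmd provisioning sequence that
# follows in the log. RPC path is assumed, not taken from the log; commands
# are printed as a dry run.
RPC="scripts/rpc.py"                     # hypothetical path to SPDK's RPC client
SUBNQN=nqn.2016-06.io.spdk:cnode1

provision_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512 -b Malloc0"
  "$RPC nvmf_create_subsystem $SUBNQN -a -s SPDK00000000000001"
  "$RPC nvmf_subsystem_add_ns $SUBNQN Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789"
  "$RPC nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420"
  "$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${provision_cmds[@]}"
```

The final `nvmf_get_subsystems` call in the log is then just a readback confirming this configuration (discovery subsystem plus `cnode1` with namespace 1 on `Malloc0`).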
00:22:31.056 [2024-11-20 10:01:04.051023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.056 [2024-11-20 10:01:04.051046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.056 [2024-11-20 10:01:04.051077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.056 [2024-11-20 10:01:04.051078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.056 [2024-11-20 10:01:04.148262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.056 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 Malloc0 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 [2024-11-20 10:01:04.252772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 10:01:04 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.057 [ 00:22:31.057 { 00:22:31.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.057 "subtype": "Discovery", 00:22:31.057 "listen_addresses": [ 00:22:31.057 { 00:22:31.057 "trtype": "TCP", 00:22:31.057 "adrfam": "IPv4", 00:22:31.057 "traddr": "10.0.0.2", 00:22:31.057 "trsvcid": "4420" 00:22:31.057 } 00:22:31.057 ], 00:22:31.057 "allow_any_host": true, 00:22:31.057 "hosts": [] 00:22:31.057 }, 00:22:31.057 { 00:22:31.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.057 "subtype": "NVMe", 00:22:31.057 "listen_addresses": [ 00:22:31.057 { 00:22:31.057 "trtype": "TCP", 00:22:31.057 "adrfam": "IPv4", 00:22:31.057 "traddr": "10.0.0.2", 00:22:31.057 "trsvcid": "4420" 00:22:31.057 } 00:22:31.057 ], 00:22:31.057 "allow_any_host": true, 00:22:31.057 "hosts": [], 00:22:31.057 "serial_number": "SPDK00000000000001", 00:22:31.057 "model_number": "SPDK bdev Controller", 00:22:31.057 "max_namespaces": 32, 00:22:31.057 "min_cntlid": 1, 00:22:31.057 "max_cntlid": 65519, 00:22:31.057 "namespaces": [ 00:22:31.057 { 00:22:31.057 "nsid": 1, 00:22:31.057 "bdev_name": "Malloc0", 00:22:31.057 "name": "Malloc0", 00:22:31.057 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:31.057 "eui64": "ABCDEF0123456789", 00:22:31.057 "uuid": "dbaa3ad5-f697-48af-b685-f136b2413f34" 00:22:31.057 } 00:22:31.057 ] 00:22:31.057 } 00:22:31.057 ] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.057 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:31.057 [2024-11-20 10:01:04.301332] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:31.057 [2024-11-20 10:01:04.301365] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736695 ] 00:22:31.057 [2024-11-20 10:01:04.341729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:31.057 [2024-11-20 10:01:04.341772] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:31.057 [2024-11-20 10:01:04.341777] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:31.057 [2024-11-20 10:01:04.341790] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:31.057 [2024-11-20 10:01:04.341799] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:31.057 [2024-11-20 10:01:04.345520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:31.057 [2024-11-20 10:01:04.345552] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa08690 0 00:22:31.057 [2024-11-20 10:01:04.353216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:31.057 [2024-11-20 10:01:04.353230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:31.057 [2024-11-20 10:01:04.353234] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:31.057 [2024-11-20 10:01:04.353237] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:31.057 [2024-11-20 10:01:04.353268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.353274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.353277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.057 [2024-11-20 10:01:04.353289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:31.057 [2024-11-20 10:01:04.353306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.057 [2024-11-20 10:01:04.359211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.057 [2024-11-20 10:01:04.359220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.057 [2024-11-20 10:01:04.359223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.057 [2024-11-20 10:01:04.359236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:31.057 [2024-11-20 10:01:04.359243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:31.057 [2024-11-20 10:01:04.359247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:31.057 [2024-11-20 10:01:04.359259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 
00:22:31.057 [2024-11-20 10:01:04.359276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.057 [2024-11-20 10:01:04.359290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.057 [2024-11-20 10:01:04.359448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.057 [2024-11-20 10:01:04.359454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.057 [2024-11-20 10:01:04.359457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.057 [2024-11-20 10:01:04.359465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:31.057 [2024-11-20 10:01:04.359471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:31.057 [2024-11-20 10:01:04.359477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.057 [2024-11-20 10:01:04.359489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.057 [2024-11-20 10:01:04.359499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.057 [2024-11-20 10:01:04.359561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.057 [2024-11-20 10:01:04.359566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:31.057 [2024-11-20 10:01:04.359569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.057 [2024-11-20 10:01:04.359577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:31.057 [2024-11-20 10:01:04.359584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:31.057 [2024-11-20 10:01:04.359590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.057 [2024-11-20 10:01:04.359602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.057 [2024-11-20 10:01:04.359611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.057 [2024-11-20 10:01:04.359674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.057 [2024-11-20 10:01:04.359680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.057 [2024-11-20 10:01:04.359683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.057 [2024-11-20 10:01:04.359690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:31.057 [2024-11-20 10:01:04.359699] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.057 [2024-11-20 10:01:04.359702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.359705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.359713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.058 [2024-11-20 10:01:04.359722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 10:01:04.359784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.058 [2024-11-20 10:01:04.359790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.058 [2024-11-20 10:01:04.359792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.359795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.058 [2024-11-20 10:01:04.359799] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:31.058 [2024-11-20 10:01:04.359804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:31.058 [2024-11-20 10:01:04.359810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:31.058 [2024-11-20 10:01:04.359918] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:31.058 [2024-11-20 10:01:04.359922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:31.058 [2024-11-20 10:01:04.359930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.359933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.359936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.359942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.058 [2024-11-20 10:01:04.359951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 10:01:04.360025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.058 [2024-11-20 10:01:04.360031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.058 [2024-11-20 10:01:04.360033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.058 [2024-11-20 10:01:04.360040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:31.058 [2024-11-20 10:01:04.360048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.058 [2024-11-20 10:01:04.360069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 
10:01:04.360129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.058 [2024-11-20 10:01:04.360134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.058 [2024-11-20 10:01:04.360137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.058 [2024-11-20 10:01:04.360144] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:31.058 [2024-11-20 10:01:04.360148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:31.058 [2024-11-20 10:01:04.360165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.058 [2024-11-20 10:01:04.360191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 10:01:04.360286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.058 [2024-11-20 10:01:04.360292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:22:31.058 [2024-11-20 10:01:04.360295] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa08690): datao=0, datal=4096, cccid=0 00:22:31.058 [2024-11-20 10:01:04.360303] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6a100) on tqpair(0xa08690): expected_datao=0, payload_size=4096 00:22:31.058 [2024-11-20 10:01:04.360307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360319] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360323] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.058 [2024-11-20 10:01:04.360358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.058 [2024-11-20 10:01:04.360361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.058 [2024-11-20 10:01:04.360371] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:31.058 [2024-11-20 10:01:04.360375] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:31.058 [2024-11-20 10:01:04.360379] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:31.058 [2024-11-20 10:01:04.360386] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:31.058 [2024-11-20 10:01:04.360389] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:22:31.058 [2024-11-20 10:01:04.360393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.058 [2024-11-20 10:01:04.360431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 10:01:04.360497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.058 [2024-11-20 10:01:04.360503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.058 [2024-11-20 10:01:04.360508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.058 [2024-11-20 10:01:04.360517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.058 [2024-11-20 10:01:04.360534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.058 [2024-11-20 10:01:04.360549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.058 [2024-11-20 10:01:04.360565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.058 [2024-11-20 10:01:04.360580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:22:31.058 [2024-11-20 10:01:04.360593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.058 [2024-11-20 10:01:04.360596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa08690) 00:22:31.058 [2024-11-20 10:01:04.360601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.058 [2024-11-20 10:01:04.360612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a100, cid 0, qid 0 00:22:31.058 [2024-11-20 10:01:04.360617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a280, cid 1, qid 0 00:22:31.058 [2024-11-20 10:01:04.360621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a400, cid 2, qid 0 00:22:31.058 [2024-11-20 10:01:04.360625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.058 [2024-11-20 10:01:04.360628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a700, cid 4, qid 0 00:22:31.058 [2024-11-20 10:01:04.360727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.360733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.360736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.360739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a700) on tqpair=0xa08690 00:22:31.059 [2024-11-20 10:01:04.360745] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:31.059 [2024-11-20 10:01:04.360750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:31.059 [2024-11-20 10:01:04.360760] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.360764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa08690) 00:22:31.059 [2024-11-20 10:01:04.360769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.059 [2024-11-20 10:01:04.360779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a700, cid 4, qid 0 00:22:31.059 [2024-11-20 10:01:04.360850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.059 [2024-11-20 10:01:04.360856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.059 [2024-11-20 10:01:04.360859] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.360862] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa08690): datao=0, datal=4096, cccid=4 00:22:31.059 [2024-11-20 10:01:04.360866] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6a700) on tqpair(0xa08690): expected_datao=0, payload_size=4096 00:22:31.059 [2024-11-20 10:01:04.360870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.360879] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.360883] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.401385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.401389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a700) on tqpair=0xa08690 00:22:31.059 [2024-11-20 10:01:04.401407] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:31.059 [2024-11-20 10:01:04.401429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa08690) 00:22:31.059 [2024-11-20 10:01:04.401442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.059 [2024-11-20 10:01:04.401449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa08690) 00:22:31.059 [2024-11-20 10:01:04.401460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.059 [2024-11-20 10:01:04.401477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a700, cid 4, qid 0 00:22:31.059 [2024-11-20 10:01:04.401482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a880, cid 5, qid 0 00:22:31.059 [2024-11-20 10:01:04.401588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.059 [2024-11-20 10:01:04.401594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.059 [2024-11-20 10:01:04.401597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401601] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa08690): datao=0, datal=1024, cccid=4 00:22:31.059 [2024-11-20 10:01:04.401605] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6a700) on tqpair(0xa08690): expected_datao=0, 
payload_size=1024 00:22:31.059 [2024-11-20 10:01:04.401609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401614] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401617] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.401630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.401633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.401636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a880) on tqpair=0xa08690 00:22:31.059 [2024-11-20 10:01:04.442277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.442286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.442289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a700) on tqpair=0xa08690 00:22:31.059 [2024-11-20 10:01:04.442303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa08690) 00:22:31.059 [2024-11-20 10:01:04.442313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.059 [2024-11-20 10:01:04.442327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a700, cid 4, qid 0 00:22:31.059 [2024-11-20 10:01:04.442396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.059 [2024-11-20 10:01:04.442401] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.059 [2024-11-20 10:01:04.442404] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442408] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa08690): datao=0, datal=3072, cccid=4 00:22:31.059 [2024-11-20 10:01:04.442411] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6a700) on tqpair(0xa08690): expected_datao=0, payload_size=3072 00:22:31.059 [2024-11-20 10:01:04.442415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442430] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442434] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.442472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.442475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a700) on tqpair=0xa08690 00:22:31.059 [2024-11-20 10:01:04.442485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa08690) 00:22:31.059 [2024-11-20 10:01:04.442494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.059 [2024-11-20 10:01:04.442507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a700, cid 4, qid 0 00:22:31.059 [2024-11-20 10:01:04.442579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.059 [2024-11-20 
10:01:04.442585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.059 [2024-11-20 10:01:04.442588] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa08690): datao=0, datal=8, cccid=4 00:22:31.059 [2024-11-20 10:01:04.442594] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6a700) on tqpair(0xa08690): expected_datao=0, payload_size=8 00:22:31.059 [2024-11-20 10:01:04.442598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442603] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.442606] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.487211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.059 [2024-11-20 10:01:04.487221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.059 [2024-11-20 10:01:04.487227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.059 [2024-11-20 10:01:04.487231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a700) on tqpair=0xa08690 00:22:31.059 ===================================================== 00:22:31.059 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:31.059 ===================================================== 00:22:31.059 Controller Capabilities/Features 00:22:31.059 ================================ 00:22:31.059 Vendor ID: 0000 00:22:31.059 Subsystem Vendor ID: 0000 00:22:31.059 Serial Number: .................... 00:22:31.059 Model Number: ........................................ 
00:22:31.059 Firmware Version: 25.01 00:22:31.059 Recommended Arb Burst: 0 00:22:31.059 IEEE OUI Identifier: 00 00 00 00:22:31.059 Multi-path I/O 00:22:31.059 May have multiple subsystem ports: No 00:22:31.059 May have multiple controllers: No 00:22:31.059 Associated with SR-IOV VF: No 00:22:31.059 Max Data Transfer Size: 131072 00:22:31.059 Max Number of Namespaces: 0 00:22:31.059 Max Number of I/O Queues: 1024 00:22:31.059 NVMe Specification Version (VS): 1.3 00:22:31.059 NVMe Specification Version (Identify): 1.3 00:22:31.059 Maximum Queue Entries: 128 00:22:31.059 Contiguous Queues Required: Yes 00:22:31.059 Arbitration Mechanisms Supported 00:22:31.059 Weighted Round Robin: Not Supported 00:22:31.059 Vendor Specific: Not Supported 00:22:31.059 Reset Timeout: 15000 ms 00:22:31.059 Doorbell Stride: 4 bytes 00:22:31.059 NVM Subsystem Reset: Not Supported 00:22:31.059 Command Sets Supported 00:22:31.059 NVM Command Set: Supported 00:22:31.059 Boot Partition: Not Supported 00:22:31.059 Memory Page Size Minimum: 4096 bytes 00:22:31.059 Memory Page Size Maximum: 4096 bytes 00:22:31.059 Persistent Memory Region: Not Supported 00:22:31.059 Optional Asynchronous Events Supported 00:22:31.059 Namespace Attribute Notices: Not Supported 00:22:31.059 Firmware Activation Notices: Not Supported 00:22:31.059 ANA Change Notices: Not Supported 00:22:31.060 PLE Aggregate Log Change Notices: Not Supported 00:22:31.060 LBA Status Info Alert Notices: Not Supported 00:22:31.060 EGE Aggregate Log Change Notices: Not Supported 00:22:31.060 Normal NVM Subsystem Shutdown event: Not Supported 00:22:31.060 Zone Descriptor Change Notices: Not Supported 00:22:31.060 Discovery Log Change Notices: Supported 00:22:31.060 Controller Attributes 00:22:31.060 128-bit Host Identifier: Not Supported 00:22:31.060 Non-Operational Permissive Mode: Not Supported 00:22:31.060 NVM Sets: Not Supported 00:22:31.060 Read Recovery Levels: Not Supported 00:22:31.060 Endurance Groups: Not Supported 00:22:31.060 
Predictable Latency Mode: Not Supported
00:22:31.060 Traffic Based Keep Alive: Not Supported
00:22:31.060 Namespace Granularity: Not Supported
00:22:31.060 SQ Associations: Not Supported
00:22:31.060 UUID List: Not Supported
00:22:31.060 Multi-Domain Subsystem: Not Supported
00:22:31.060 Fixed Capacity Management: Not Supported
00:22:31.060 Variable Capacity Management: Not Supported
00:22:31.060 Delete Endurance Group: Not Supported
00:22:31.060 Delete NVM Set: Not Supported
00:22:31.060 Extended LBA Formats Supported: Not Supported
00:22:31.060 Flexible Data Placement Supported: Not Supported
00:22:31.060
00:22:31.060 Controller Memory Buffer Support
00:22:31.060 ================================
00:22:31.060 Supported: No
00:22:31.060
00:22:31.060 Persistent Memory Region Support
00:22:31.060 ================================
00:22:31.060 Supported: No
00:22:31.060
00:22:31.060 Admin Command Set Attributes
00:22:31.060 ============================
00:22:31.060 Security Send/Receive: Not Supported
00:22:31.060 Format NVM: Not Supported
00:22:31.060 Firmware Activate/Download: Not Supported
00:22:31.060 Namespace Management: Not Supported
00:22:31.060 Device Self-Test: Not Supported
00:22:31.060 Directives: Not Supported
00:22:31.060 NVMe-MI: Not Supported
00:22:31.060 Virtualization Management: Not Supported
00:22:31.060 Doorbell Buffer Config: Not Supported
00:22:31.060 Get LBA Status Capability: Not Supported
00:22:31.060 Command & Feature Lockdown Capability: Not Supported
00:22:31.060 Abort Command Limit: 1
00:22:31.060 Async Event Request Limit: 4
00:22:31.060 Number of Firmware Slots: N/A
00:22:31.060 Firmware Slot 1 Read-Only: N/A
00:22:31.060 Firmware Activation Without Reset: N/A
00:22:31.060 Multiple Update Detection Support: N/A
00:22:31.060 Firmware Update Granularity: No Information Provided
00:22:31.060 Per-Namespace SMART Log: No
00:22:31.060 Asymmetric Namespace Access Log Page: Not Supported
00:22:31.060 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:31.060 Command Effects Log Page: Not Supported
00:22:31.060 Get Log Page Extended Data: Supported
00:22:31.060 Telemetry Log Pages: Not Supported
00:22:31.060 Persistent Event Log Pages: Not Supported
00:22:31.060 Supported Log Pages Log Page: May Support
00:22:31.060 Commands Supported & Effects Log Page: Not Supported
00:22:31.060 Feature Identifiers & Effects Log Page: May Support
00:22:31.060 NVMe-MI Commands & Effects Log Page: May Support
00:22:31.060 Data Area 4 for Telemetry Log: Not Supported
00:22:31.060 Error Log Page Entries Supported: 128
00:22:31.060 Keep Alive: Not Supported
00:22:31.060
00:22:31.060 NVM Command Set Attributes
00:22:31.060 ==========================
00:22:31.060 Submission Queue Entry Size
00:22:31.060 Max: 1
00:22:31.060 Min: 1
00:22:31.060 Completion Queue Entry Size
00:22:31.060 Max: 1
00:22:31.060 Min: 1
00:22:31.060 Number of Namespaces: 0
00:22:31.060 Compare Command: Not Supported
00:22:31.060 Write Uncorrectable Command: Not Supported
00:22:31.060 Dataset Management Command: Not Supported
00:22:31.060 Write Zeroes Command: Not Supported
00:22:31.060 Set Features Save Field: Not Supported
00:22:31.060 Reservations: Not Supported
00:22:31.060 Timestamp: Not Supported
00:22:31.060 Copy: Not Supported
00:22:31.060 Volatile Write Cache: Not Present
00:22:31.060 Atomic Write Unit (Normal): 1
00:22:31.060 Atomic Write Unit (PFail): 1
00:22:31.060 Atomic Compare & Write Unit: 1
00:22:31.060 Fused Compare & Write: Supported
00:22:31.060 Scatter-Gather List
00:22:31.060 SGL Command Set: Supported
00:22:31.060 SGL Keyed: Supported
00:22:31.060 SGL Bit Bucket Descriptor: Not Supported
00:22:31.060 SGL Metadata Pointer: Not Supported
00:22:31.060 Oversized SGL: Not Supported
00:22:31.060 SGL Metadata Address: Not Supported
00:22:31.060 SGL Offset: Supported
00:22:31.060 Transport SGL Data Block: Not Supported
00:22:31.060 Replay Protected Memory Block: Not Supported
00:22:31.060
00:22:31.060 Firmware Slot Information
00:22:31.060 =========================
00:22:31.060 Active slot: 0
00:22:31.060
00:22:31.060
00:22:31.060 Error Log
00:22:31.060 =========
00:22:31.060
00:22:31.060 Active Namespaces
00:22:31.060 =================
00:22:31.060 Discovery Log Page
00:22:31.060 ==================
00:22:31.060 Generation Counter: 2
00:22:31.060 Number of Records: 2
00:22:31.060 Record Format: 0
00:22:31.060
00:22:31.060 Discovery Log Entry 0
00:22:31.060 ----------------------
00:22:31.060 Transport Type: 3 (TCP)
00:22:31.060 Address Family: 1 (IPv4)
00:22:31.060 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:31.060 Entry Flags:
00:22:31.060 Duplicate Returned Information: 1
00:22:31.060 Explicit Persistent Connection Support for Discovery: 1
00:22:31.060 Transport Requirements:
00:22:31.060 Secure Channel: Not Required
00:22:31.060 Port ID: 0 (0x0000)
00:22:31.060 Controller ID: 65535 (0xffff)
00:22:31.060 Admin Max SQ Size: 128
00:22:31.060 Transport Service Identifier: 4420
00:22:31.060 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:31.060 Transport Address: 10.0.0.2
00:22:31.060 Discovery Log Entry 1
00:22:31.060 ----------------------
00:22:31.060 Transport Type: 3 (TCP)
00:22:31.060 Address Family: 1 (IPv4)
00:22:31.060 Subsystem Type: 2 (NVM Subsystem)
00:22:31.060 Entry Flags:
00:22:31.060 Duplicate Returned Information: 0
00:22:31.060 Explicit Persistent Connection Support for Discovery: 0
00:22:31.060 Transport Requirements:
00:22:31.060 Secure Channel: Not Required
00:22:31.060 Port ID: 0 (0x0000)
00:22:31.060 Controller ID: 65535 (0xffff)
00:22:31.060 Admin Max SQ Size: 128
00:22:31.060 Transport Service Identifier: 4420
00:22:31.060 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:31.060 Transport Address: 10.0.0.2 [2024-11-20 10:01:04.487310] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:31.060 [2024-11-20
10:01:04.487321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a100) on tqpair=0xa08690 00:22:31.060 [2024-11-20 10:01:04.487327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.060 [2024-11-20 10:01:04.487332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a280) on tqpair=0xa08690 00:22:31.060 [2024-11-20 10:01:04.487337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.060 [2024-11-20 10:01:04.487341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a400) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.061 [2024-11-20 10:01:04.487350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.061 [2024-11-20 10:01:04.487366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.487457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 
10:01:04.487463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.487468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.487574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.487580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.487583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487591] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:31.061 [2024-11-20 10:01:04.487594] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:31.061 [2024-11-20 10:01:04.487602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 
[2024-11-20 10:01:04.487611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.487691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.487697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.487700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.487808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.487814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.487816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 
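The two discovery log entries dumped above (Transport Type 3/TCP, Address Family 1/IPv4, trsvcid 4420, traddr 10.0.0.2, subnqns nqn.2014-08.org.nvmexpress.discovery and nqn.2016-06.io.spdk:cnode1) follow the fixed 1024-byte Discovery Log Page Entry layout from the NVMe over Fabrics specification. As a minimal sketch of how those fields map to byte offsets, the parser below decodes a synthetic entry mirroring Discovery Log Entry 1; the sample bytes and the `parse_discovery_entry` helper are illustrative, not SPDK code.

```python
import struct

def parse_discovery_entry(raw: bytes) -> dict:
    """Decode one 1024-byte NVMe-oF discovery log page entry.

    Offsets per the NVMe-oF spec: TRTYPE/ADRFAM/SUBTYPE/TREQ at bytes 0-3,
    PORTID/CNTLID/ASQSZ as little-endian u16 at bytes 4-9, TRSVCID at
    bytes 32-63, SUBNQN at 256-511, TRADDR at 512-767 (ASCII, padded).
    """
    assert len(raw) == 1024
    trtype, adrfam, subtype, treq = struct.unpack_from("<4B", raw, 0)
    portid, cntlid, asqsz = struct.unpack_from("<3H", raw, 4)
    trsvcid = raw[32:64].decode("ascii").rstrip("\x00 ")
    subnqn = raw[256:512].decode("ascii").rstrip("\x00 ")
    traddr = raw[512:768].decode("ascii").rstrip("\x00 ")
    return {
        "trtype": trtype,    # 3 = TCP
        "adrfam": adrfam,    # 1 = IPv4
        "subtype": subtype,  # 2 = NVM subsystem, 3 = current discovery subsystem
        "treq": treq,
        "portid": portid,
        "cntlid": cntlid,
        "asqsz": asqsz,
        "trsvcid": trsvcid,
        "subnqn": subnqn,
        "traddr": traddr,
    }

# Synthetic entry matching the values printed for Discovery Log Entry 1 above.
entry = bytearray(1024)
struct.pack_into("<4B3H", entry, 0, 3, 1, 2, 0, 0, 0xFFFF, 128)
entry[32:36] = b"4420"
entry[256:282] = b"nqn.2016-06.io.spdk:cnode1"
entry[512:520] = b"10.0.0.2"

parsed = parse_discovery_entry(bytes(entry))
print(parsed["trsvcid"], parsed["traddr"], parsed["subnqn"])
# → 4420 10.0.0.2 nqn.2016-06.io.spdk:cnode1
```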
00:22:31.061 [2024-11-20 10:01:04.487828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.487925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.487931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.487933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.487945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.487951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.487957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.487966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.488026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 
[2024-11-20 10:01:04.488034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.488058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.488069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.488128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.488136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.488161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.488172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 
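The repeating FABRIC PROPERTY GET records above are the host side of controller shutdown: after `nvme_ctrlr_shutdown_set_cc_done` sets CC.SHN (with RTD3E = 0 us and a 10000 ms shutdown timeout, as logged), the driver keeps issuing Property Get for CSTS until the Shutdown Status field (SHST, CSTS bits 3:2) reports completion. A minimal synchronous sketch of that wait loop follows; `read_csts` is a hypothetical callable standing in for a fabrics Property Get, whereas SPDK's real implementation is asynchronous and state-machine driven.

```python
import time

CSTS_SHST_MASK = 0x0C      # CSTS bits 3:2 = Shutdown Status (SHST)
CSTS_SHST_COMPLETE = 0x08  # 10b = shutdown processing complete

def wait_for_shutdown(read_csts, timeout_ms=10000, poll_interval_s=0.01):
    """Poll CSTS (via the supplied Property Get stand-in) until SHST
    indicates shutdown complete, or the timeout expires."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        csts = read_csts()
        if (csts & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE:
            return True
        time.sleep(poll_interval_s)
    return False  # mirrors the 10000 ms shutdown timeout guard in the log

# Simulated controller: reports "shutdown occurring" (01b) twice, then complete.
responses = iter([0x04, 0x04, 0x08])
print(wait_for_shutdown(lambda: next(responses)))  # → True
```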
00:22:31.061 [2024-11-20 10:01:04.488245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.488255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.488278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.488288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.488363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.488371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.488395] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.488404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.488471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.488480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.061 [2024-11-20 10:01:04.488504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.061 [2024-11-20 10:01:04.488514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.061 [2024-11-20 10:01:04.488573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.061 [2024-11-20 10:01:04.488579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.061 [2024-11-20 10:01:04.488582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.061 [2024-11-20 10:01:04.488585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.061 [2024-11-20 10:01:04.488593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488597] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.488606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.488615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.488690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.488696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.488699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.488710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.488722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.488731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.488807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.488813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.488816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488819] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.488827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.488839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.488848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.488915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.488920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.488923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.488935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.488942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.488947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.488956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 
10:01:04.489026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 
10:01:04.489162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:31.062 [2024-11-20 10:01:04.489566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:31.062 [2024-11-20 10:01:04.489764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.062 [2024-11-20 10:01:04.489773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.062 [2024-11-20 10:01:04.489779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.062 [2024-11-20 10:01:04.489785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.062 [2024-11-20 10:01:04.489794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.062 [2024-11-20 10:01:04.489860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.062 [2024-11-20 10:01:04.489866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.062 [2024-11-20 10:01:04.489869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.489872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.063 [2024-11-20 10:01:04.489881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.489885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.489888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.063 [2024-11-20 10:01:04.489893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.063 [2024-11-20 10:01:04.489902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.063 [2024-11-20 10:01:04.489977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:31.063 [2024-11-20 10:01:04.489983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.063 [2024-11-20 10:01:04.489986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.489989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.063 [2024-11-20 10:01:04.489997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.490000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.063 [2024-11-20 10:01:04.490004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.063 [2024-11-20 10:01:04.490009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.063 [2024-11-20 10:01:04.490018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.064 [2024-11-20 10:01:04.491188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.491193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.491196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.491199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.064 [2024-11-20 10:01:04.495219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.495224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.495227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa08690) 00:22:31.064 [2024-11-20 10:01:04.495235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.495247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6a580, cid 3, qid 0 00:22:31.064 [2024-11-20 10:01:04.495375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.495381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.495384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.495387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6a580) on tqpair=0xa08690 00:22:31.064 [2024-11-20 10:01:04.495393] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:31.064 00:22:31.064 10:01:04 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:31.064 [2024-11-20 10:01:04.533623] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:31.064 [2024-11-20 10:01:04.533671] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736803 ] 00:22:31.064 [2024-11-20 10:01:04.573400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:31.064 [2024-11-20 10:01:04.573441] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:31.064 [2024-11-20 10:01:04.573446] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:31.064 [2024-11-20 10:01:04.573456] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:31.064 [2024-11-20 10:01:04.573464] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:31.064 [2024-11-20 10:01:04.577376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:31.064 [2024-11-20 10:01:04.577401] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x74d690 0 00:22:31.064 [2024-11-20 10:01:04.585214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:31.064 [2024-11-20 10:01:04.585228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:31.064 [2024-11-20 10:01:04.585232] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:31.064 [2024-11-20 
10:01:04.585235] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:31.064 [2024-11-20 10:01:04.585260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.585265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.585268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.064 [2024-11-20 10:01:04.585278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:31.064 [2024-11-20 10:01:04.585295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.064 [2024-11-20 10:01:04.593213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.593221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.593224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.064 [2024-11-20 10:01:04.593238] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:31.064 [2024-11-20 10:01:04.593246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:31.064 [2024-11-20 10:01:04.593251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:31.064 [2024-11-20 10:01:04.593261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 
00:22:31.064 [2024-11-20 10:01:04.593274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.593287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.064 [2024-11-20 10:01:04.593447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.593453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.593456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.064 [2024-11-20 10:01:04.593463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:31.064 [2024-11-20 10:01:04.593470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:31.064 [2024-11-20 10:01:04.593476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.064 [2024-11-20 10:01:04.593488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.593498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.064 [2024-11-20 10:01:04.593596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.593601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 
10:01:04.593604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.064 [2024-11-20 10:01:04.593612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:31.064 [2024-11-20 10:01:04.593619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:31.064 [2024-11-20 10:01:04.593624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.064 [2024-11-20 10:01:04.593636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.593646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.064 [2024-11-20 10:01:04.593746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.593751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.593754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.064 [2024-11-20 10:01:04.593762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:31.064 [2024-11-20 10:01:04.593772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 [2024-11-20 
10:01:04.593775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.064 [2024-11-20 10:01:04.593784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.593794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.064 [2024-11-20 10:01:04.593858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.064 [2024-11-20 10:01:04.593863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.064 [2024-11-20 10:01:04.593866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.593870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.064 [2024-11-20 10:01:04.593874] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:31.064 [2024-11-20 10:01:04.593878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:31.064 [2024-11-20 10:01:04.593884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:31.064 [2024-11-20 10:01:04.593992] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:31.064 [2024-11-20 10:01:04.593996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:31.064 [2024-11-20 10:01:04.594003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.064 
[2024-11-20 10:01:04.594006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.064 [2024-11-20 10:01:04.594009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.064 [2024-11-20 10:01:04.594014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.064 [2024-11-20 10:01:04.594024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.065 [2024-11-20 10:01:04.594089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.065 [2024-11-20 10:01:04.594095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.065 [2024-11-20 10:01:04.594098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.065 [2024-11-20 10:01:04.594105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:31.065 [2024-11-20 10:01:04.594113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.065 [2024-11-20 10:01:04.594125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.065 [2024-11-20 10:01:04.594134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.065 [2024-11-20 10:01:04.594240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.065 [2024-11-20 10:01:04.594246] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.065 [2024-11-20 10:01:04.594249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.065 [2024-11-20 10:01:04.594256] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:31.065 [2024-11-20 10:01:04.594263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:31.065 [2024-11-20 10:01:04.594271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:31.065 [2024-11-20 10:01:04.594279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:31.065 [2024-11-20 10:01:04.594287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.065 [2024-11-20 10:01:04.594296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.065 [2024-11-20 10:01:04.594306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.065 [2024-11-20 10:01:04.594425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.065 [2024-11-20 10:01:04.594430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.065 [2024-11-20 10:01:04.594433] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594436] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=4096, cccid=0 00:22:31.065 [2024-11-20 10:01:04.594440] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af100) on tqpair(0x74d690): expected_datao=0, payload_size=4096 00:22:31.065 [2024-11-20 10:01:04.594444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.065 [2024-11-20 10:01:04.594458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.328 [2024-11-20 10:01:04.635396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.328 [2024-11-20 10:01:04.635399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.328 [2024-11-20 10:01:04.635409] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:31.328 [2024-11-20 10:01:04.635413] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:31.328 [2024-11-20 10:01:04.635417] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:31.328 [2024-11-20 10:01:04.635424] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:31.328 [2024-11-20 10:01:04.635428] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:31.328 [2024-11-20 10:01:04.635432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 
00:22:31.328 [2024-11-20 10:01:04.635442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.328 [2024-11-20 10:01:04.635474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.328 [2024-11-20 10:01:04.635586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.328 [2024-11-20 10:01:04.635593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.328 [2024-11-20 10:01:04.635596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690 00:22:31.328 [2024-11-20 10:01:04.635605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.328 [2024-11-20 10:01:04.635622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635625] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.328 [2024-11-20 10:01:04.635638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.328 [2024-11-20 10:01:04.635654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.328 [2024-11-20 10:01:04.635669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.328 [2024-11-20 10:01:04.635702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af100, cid 0, qid 0 00:22:31.328 [2024-11-20 10:01:04.635707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af280, cid 1, qid 0 00:22:31.328 [2024-11-20 10:01:04.635711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af400, cid 2, qid 0 00:22:31.328 [2024-11-20 10:01:04.635715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.328 [2024-11-20 10:01:04.635719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.328 [2024-11-20 10:01:04.635814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.328 [2024-11-20 10:01:04.635820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.328 [2024-11-20 10:01:04.635823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.328 [2024-11-20 10:01:04.635832] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:31.328 [2024-11-20 10:01:04.635839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.635858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.635864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.635869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.328 [2024-11-20 10:01:04.635879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.328 [2024-11-20 10:01:04.635988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.328 [2024-11-20 10:01:04.635994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.328 [2024-11-20 10:01:04.635997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.328 [2024-11-20 10:01:04.636051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.636061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.636067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.636076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.328 [2024-11-20 10:01:04.636086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.328 [2024-11-20 10:01:04.636164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.328 [2024-11-20 10:01:04.636170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.328 [2024-11-20 10:01:04.636173] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=4096, cccid=4 00:22:31.328 [2024-11-20 10:01:04.636180] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af700) on tqpair(0x74d690): expected_datao=0, payload_size=4096 00:22:31.328 [2024-11-20 10:01:04.636184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636190] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636193] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.328 [2024-11-20 10:01:04.636245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.328 [2024-11-20 10:01:04.636248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.328 [2024-11-20 10:01:04.636258] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:31.328 [2024-11-20 10:01:04.636270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.636278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:31.328 [2024-11-20 10:01:04.636286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.328 [2024-11-20 10:01:04.636289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.328 [2024-11-20 10:01:04.636295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.328 [2024-11-20 10:01:04.636306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.328 [2024-11-20 10:01:04.636389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.636394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.329 [2024-11-20 10:01:04.636397] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636400] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=4096, cccid=4 00:22:31.329 [2024-11-20 10:01:04.636404] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af700) on tqpair(0x74d690): expected_datao=0, payload_size=4096 00:22:31.329 [2024-11-20 10:01:04.636408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636416] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.636446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.636449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 
10:01:04.636452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.636462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.636485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.636495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.329 [2024-11-20 10:01:04.636570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.636576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.329 [2024-11-20 10:01:04.636579] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636582] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=4096, cccid=4 00:22:31.329 [2024-11-20 10:01:04.636586] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af700) on tqpair(0x74d690): expected_datao=0, payload_size=4096 00:22:31.329 [2024-11-20 10:01:04.636590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636595] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636598] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.636647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.636650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.636661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:31.329 [2024-11-20 10:01:04.636693] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:31.329 [2024-11-20 10:01:04.636698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:31.329 
[2024-11-20 10:01:04.636702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:31.329 [2024-11-20 10:01:04.636714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.636723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.636729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.636740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.329 [2024-11-20 10:01:04.636752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.329 [2024-11-20 10:01:04.636756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af880, cid 5, qid 0 00:22:31.329 [2024-11-20 10:01:04.636872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.636877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.636880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.636889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.636894] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.636897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af880) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.636908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.636911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.636916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.636926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af880, cid 5, qid 0 00:22:31.329 [2024-11-20 10:01:04.637021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.637027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.637032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.637035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af880) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.637043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.637046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.637052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.637061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af880, cid 5, qid 0 00:22:31.329 [2024-11-20 10:01:04.637128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.637134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.637137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.637140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af880) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.637148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.637151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.637156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.637165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af880, cid 5, qid 0 00:22:31.329 [2024-11-20 10:01:04.641208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.641215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.641219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af880) on tqpair=0x74d690 00:22:31.329 [2024-11-20 10:01:04.641236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.641245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.641251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 
10:01:04.641255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.641260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.641266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.641274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.641280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74d690) 00:22:31.329 [2024-11-20 10:01:04.641289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.329 [2024-11-20 10:01:04.641300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af880, cid 5, qid 0 00:22:31.329 [2024-11-20 10:01:04.641305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af700, cid 4, qid 0 00:22:31.329 [2024-11-20 10:01:04.641309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7afa00, cid 6, qid 0 00:22:31.329 [2024-11-20 10:01:04.641316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7afb80, cid 7, qid 0 00:22:31.329 [2024-11-20 10:01:04.641543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.641549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:31.329 [2024-11-20 10:01:04.641552] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641555] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=8192, cccid=5 00:22:31.329 [2024-11-20 10:01:04.641559] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af880) on tqpair(0x74d690): expected_datao=0, payload_size=8192 00:22:31.329 [2024-11-20 10:01:04.641563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641585] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641589] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.641602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.329 [2024-11-20 10:01:04.641605] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641608] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=512, cccid=4 00:22:31.329 [2024-11-20 10:01:04.641612] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7af700) on tqpair(0x74d690): expected_datao=0, payload_size=512 00:22:31.329 [2024-11-20 10:01:04.641615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641620] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641624] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.641633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.329 [2024-11-20 10:01:04.641636] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641639] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=512, cccid=6 00:22:31.329 [2024-11-20 10:01:04.641643] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7afa00) on tqpair(0x74d690): expected_datao=0, payload_size=512 00:22:31.329 [2024-11-20 10:01:04.641646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641651] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641655] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:31.329 [2024-11-20 10:01:04.641664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:31.329 [2024-11-20 10:01:04.641667] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641670] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74d690): datao=0, datal=4096, cccid=7 00:22:31.329 [2024-11-20 10:01:04.641673] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7afb80) on tqpair(0x74d690): expected_datao=0, payload_size=4096 00:22:31.329 [2024-11-20 10:01:04.641677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641686] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:31.329 [2024-11-20 10:01:04.641695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.329 [2024-11-20 10:01:04.641701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.329 [2024-11-20 10:01:04.641703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:31.329 [2024-11-20 10:01:04.641707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af880) on tqpair=0x74d690
00:22:31.329 [2024-11-20 10:01:04.641716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:31.329 [2024-11-20 10:01:04.641722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:31.329 [2024-11-20 10:01:04.641726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:31.329 [2024-11-20 10:01:04.641729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af700) on tqpair=0x74d690
00:22:31.329 [2024-11-20 10:01:04.641738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:31.329 [2024-11-20 10:01:04.641743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:31.329 [2024-11-20 10:01:04.641746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:31.329 [2024-11-20 10:01:04.641749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7afa00) on tqpair=0x74d690
00:22:31.329 [2024-11-20 10:01:04.641754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:31.329 [2024-11-20 10:01:04.641759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:31.329 [2024-11-20 10:01:04.641762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:31.329 [2024-11-20 10:01:04.641766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7afb80) on tqpair=0x74d690
00:22:31.329 =====================================================
00:22:31.329 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:31.329 =====================================================
00:22:31.329 Controller Capabilities/Features
00:22:31.329 ================================
00:22:31.329 Vendor ID: 8086
00:22:31.329 Subsystem Vendor ID: 8086
00:22:31.329 Serial Number: SPDK00000000000001
00:22:31.329 Model Number: SPDK bdev Controller
Firmware Version: 25.01
00:22:31.329 Recommended Arb Burst: 6
00:22:31.329 IEEE OUI Identifier: e4 d2 5c
00:22:31.329 Multi-path I/O
00:22:31.329 May have multiple subsystem ports: Yes
00:22:31.329 May have multiple controllers: Yes
00:22:31.329 Associated with SR-IOV VF: No
00:22:31.329 Max Data Transfer Size: 131072
00:22:31.329 Max Number of Namespaces: 32
00:22:31.329 Max Number of I/O Queues: 127
00:22:31.330 NVMe Specification Version (VS): 1.3
00:22:31.330 NVMe Specification Version (Identify): 1.3
00:22:31.330 Maximum Queue Entries: 128
00:22:31.330 Contiguous Queues Required: Yes
00:22:31.330 Arbitration Mechanisms Supported
00:22:31.330 Weighted Round Robin: Not Supported
00:22:31.330 Vendor Specific: Not Supported
00:22:31.330 Reset Timeout: 15000 ms
00:22:31.330 Doorbell Stride: 4 bytes
00:22:31.330 NVM Subsystem Reset: Not Supported
00:22:31.330 Command Sets Supported
00:22:31.330 NVM Command Set: Supported
00:22:31.330 Boot Partition: Not Supported
00:22:31.330 Memory Page Size Minimum: 4096 bytes
00:22:31.330 Memory Page Size Maximum: 4096 bytes
00:22:31.330 Persistent Memory Region: Not Supported
00:22:31.330 Optional Asynchronous Events Supported
00:22:31.330 Namespace Attribute Notices: Supported
00:22:31.330 Firmware Activation Notices: Not Supported
00:22:31.330 ANA Change Notices: Not Supported
00:22:31.330 PLE Aggregate Log Change Notices: Not Supported
00:22:31.330 LBA Status Info Alert Notices: Not Supported
00:22:31.330 EGE Aggregate Log Change Notices: Not Supported
00:22:31.330 Normal NVM Subsystem Shutdown event: Not Supported
00:22:31.330 Zone Descriptor Change Notices: Not Supported
00:22:31.330 Discovery Log Change Notices: Not Supported
00:22:31.330 Controller Attributes
00:22:31.330 128-bit Host Identifier: Supported
00:22:31.330 Non-Operational Permissive Mode: Not Supported
00:22:31.330 NVM Sets: Not Supported
00:22:31.330 Read Recovery Levels: Not Supported
00:22:31.330 Endurance Groups: Not Supported
00:22:31.330 Predictable Latency Mode: Not Supported
00:22:31.330 Traffic Based Keep ALive: Not Supported
00:22:31.330 Namespace Granularity: Not Supported
00:22:31.330 SQ Associations: Not Supported
00:22:31.330 UUID List: Not Supported
00:22:31.330 Multi-Domain Subsystem: Not Supported
00:22:31.330 Fixed Capacity Management: Not Supported
00:22:31.330 Variable Capacity Management: Not Supported
00:22:31.330 Delete Endurance Group: Not Supported
00:22:31.330 Delete NVM Set: Not Supported
00:22:31.330 Extended LBA Formats Supported: Not Supported
00:22:31.330 Flexible Data Placement Supported: Not Supported
00:22:31.330
00:22:31.330 Controller Memory Buffer Support
00:22:31.330 ================================
00:22:31.330 Supported: No
00:22:31.330
00:22:31.330 Persistent Memory Region Support
00:22:31.330 ================================
00:22:31.330 Supported: No
00:22:31.330
00:22:31.330 Admin Command Set Attributes
00:22:31.330 ============================
00:22:31.330 Security Send/Receive: Not Supported
00:22:31.330 Format NVM: Not Supported
00:22:31.330 Firmware Activate/Download: Not Supported
00:22:31.330 Namespace Management: Not Supported
00:22:31.330 Device Self-Test: Not Supported
00:22:31.330 Directives: Not Supported
00:22:31.330 NVMe-MI: Not Supported
00:22:31.330 Virtualization Management: Not Supported
00:22:31.330 Doorbell Buffer Config: Not Supported
00:22:31.330 Get LBA Status Capability: Not Supported
00:22:31.330 Command & Feature Lockdown Capability: Not Supported
00:22:31.330 Abort Command Limit: 4
00:22:31.330 Async Event Request Limit: 4
00:22:31.330 Number of Firmware Slots: N/A
00:22:31.330 Firmware Slot 1 Read-Only: N/A
00:22:31.330 Firmware Activation Without Reset: N/A
00:22:31.330 Multiple Update Detection Support: N/A
00:22:31.330 Firmware Update Granularity: No Information Provided
00:22:31.330 Per-Namespace SMART Log: No
00:22:31.330 Asymmetric Namespace Access Log Page: Not Supported
00:22:31.330 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:31.330 Command Effects Log Page: Supported
00:22:31.330 Get Log Page Extended Data: Supported
00:22:31.330 Telemetry Log Pages: Not Supported
00:22:31.330 Persistent Event Log Pages: Not Supported
00:22:31.330 Supported Log Pages Log Page: May Support
00:22:31.330 Commands Supported & Effects Log Page: Not Supported
00:22:31.330 Feature Identifiers & Effects Log Page:May Support
00:22:31.330 NVMe-MI Commands & Effects Log Page: May Support
00:22:31.330 Data Area 4 for Telemetry Log: Not Supported
00:22:31.330 Error Log Page Entries Supported: 128
00:22:31.330 Keep Alive: Supported
00:22:31.330 Keep Alive Granularity: 10000 ms
00:22:31.330
00:22:31.330 NVM Command Set Attributes
00:22:31.330 ==========================
00:22:31.330 Submission Queue Entry Size
00:22:31.330 Max: 64
00:22:31.330 Min: 64
00:22:31.330 Completion Queue Entry Size
00:22:31.330 Max: 16
00:22:31.330 Min: 16
00:22:31.330 Number of Namespaces: 32
00:22:31.330 Compare Command: Supported
00:22:31.330 Write Uncorrectable Command: Not Supported
00:22:31.330 Dataset Management Command: Supported
00:22:31.330 Write Zeroes Command: Supported
00:22:31.330 Set Features Save Field: Not Supported
00:22:31.330 Reservations: Supported
00:22:31.330 Timestamp: Not Supported
00:22:31.330 Copy: Supported
00:22:31.330 Volatile Write Cache: Present
00:22:31.330 Atomic Write Unit (Normal): 1
00:22:31.330 Atomic Write Unit (PFail): 1
00:22:31.330 Atomic Compare & Write Unit: 1
00:22:31.330 Fused Compare & Write: Supported
00:22:31.330 Scatter-Gather List
00:22:31.330 SGL Command Set: Supported
00:22:31.330 SGL Keyed: Supported
00:22:31.330 SGL Bit Bucket Descriptor: Not Supported
00:22:31.330 SGL Metadata Pointer: Not Supported
00:22:31.330 Oversized SGL: Not Supported
00:22:31.330 SGL Metadata Address: Not Supported
00:22:31.330 SGL Offset: Supported
00:22:31.330 Transport SGL Data Block: Not Supported
00:22:31.330 Replay Protected Memory Block: Not Supported
00:22:31.330
00:22:31.330 Firmware Slot Information
00:22:31.330 =========================
00:22:31.330 Active slot: 1
00:22:31.330 Slot 1 Firmware Revision: 25.01
00:22:31.330
00:22:31.330
00:22:31.330 Commands Supported and Effects
00:22:31.330 ==============================
00:22:31.330 Admin Commands
00:22:31.330 --------------
00:22:31.330 Get Log Page (02h): Supported
00:22:31.330 Identify (06h): Supported
00:22:31.330 Abort (08h): Supported
00:22:31.330 Set Features (09h): Supported
00:22:31.330 Get Features (0Ah): Supported
00:22:31.330 Asynchronous Event Request (0Ch): Supported
00:22:31.330 Keep Alive (18h): Supported
00:22:31.330 I/O Commands
00:22:31.330 ------------
00:22:31.330 Flush (00h): Supported LBA-Change
00:22:31.330 Write (01h): Supported LBA-Change
00:22:31.330 Read (02h): Supported
00:22:31.330 Compare (05h): Supported
00:22:31.330 Write Zeroes (08h): Supported LBA-Change
00:22:31.330 Dataset Management (09h): Supported LBA-Change
00:22:31.330 Copy (19h): Supported LBA-Change
00:22:31.330
00:22:31.330 Error Log
00:22:31.330 =========
00:22:31.330
00:22:31.330 Arbitration
00:22:31.330 ===========
00:22:31.330 Arbitration Burst: 1
00:22:31.330
00:22:31.330 Power Management
00:22:31.330 ================
00:22:31.330 Number of Power States: 1
00:22:31.330 Current Power State: Power State #0
00:22:31.330 Power State #0:
00:22:31.330 Max Power: 0.00 W
00:22:31.330 Non-Operational State: Operational
00:22:31.330 Entry Latency: Not Reported
00:22:31.330 Exit Latency: Not Reported
00:22:31.330 Relative Read Throughput: 0
00:22:31.330 Relative Read Latency: 0
00:22:31.330 Relative Write Throughput: 0
00:22:31.330 Relative Write Latency: 0
00:22:31.330 Idle Power: Not Reported
00:22:31.330 Active Power: Not Reported
00:22:31.330 Non-Operational Permissive Mode: Not Supported
00:22:31.330
00:22:31.330 Health Information
00:22:31.330 ==================
00:22:31.330 Critical Warnings:
00:22:31.330 Available Spare Space: OK
00:22:31.330 Temperature: OK
00:22:31.330 Device Reliability: OK
00:22:31.330 Read Only: No
00:22:31.330 Volatile Memory Backup: OK
00:22:31.330 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:31.330 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:31.330 Available Spare: 0%
00:22:31.330 Available Spare Threshold: 0%
00:22:31.330 Life Percentage Used:[2024-11-20 10:01:04.641846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:31.330 [2024-11-20 10:01:04.641851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74d690)
00:22:31.330 [2024-11-20 10:01:04.641857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:31.330 [2024-11-20 10:01:04.641868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7afb80, cid 7, qid 0
00:22:31.330 [2024-11-20 10:01:04.641984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:31.330 [2024-11-20 10:01:04.641990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:31.330 [2024-11-20 10:01:04.641993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:31.330 [2024-11-20 10:01:04.641996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7afb80) on tqpair=0x74d690
00:22:31.330 [2024-11-20 10:01:04.642022] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:31.330 [2024-11-20 10:01:04.642030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af100) on tqpair=0x74d690
00:22:31.330 [2024-11-20 10:01:04.642035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:31.330 [2024-11-20 10:01:04.642040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af280) on tqpair=0x74d690
00:22:31.330 [2024-11-20 10:01:04.642044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.330 [2024-11-20 10:01:04.642048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af400) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.330 [2024-11-20 10:01:04.642056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.330 [2024-11-20 10:01:04.642067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.330 [2024-11-20 10:01:04.642079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-20 10:01:04.642090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.330 [2024-11-20 10:01:04.642156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.330 [2024-11-20 10:01:04.642161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.330 [2024-11-20 10:01:04.642164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:31.330 [2024-11-20 10:01:04.642183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.330 [2024-11-20 10:01:04.642188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-20 10:01:04.642200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.330 [2024-11-20 10:01:04.642357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.330 [2024-11-20 10:01:04.642363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.330 [2024-11-20 10:01:04.642366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642373] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:31.330 [2024-11-20 10:01:04.642377] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:31.330 [2024-11-20 10:01:04.642385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.330 [2024-11-20 10:01:04.642397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-20 10:01:04.642407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.330 [2024-11-20 10:01:04.642472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.330 [2024-11-20 
10:01:04.642477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.330 [2024-11-20 10:01:04.642480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.330 [2024-11-20 10:01:04.642504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.330 [2024-11-20 10:01:04.642513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.330 [2024-11-20 10:01:04.642608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.330 [2024-11-20 10:01:04.642613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.330 [2024-11-20 10:01:04.642616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.330 [2024-11-20 10:01:04.642628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.330 [2024-11-20 10:01:04.642634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.642640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 
10:01:04.642649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.642708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.642715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.642718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.642730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.642742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.642751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.642859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.642865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.642868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.642879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.642891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.642901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.642961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.642966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.642969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.642980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.642987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.642992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:31.331 [2024-11-20 10:01:04.643135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:31.331 [2024-11-20 10:01:04.643375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:31.331 [2024-11-20 10:01:04.643620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:31.331 [2024-11-20 10:01:04.643809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.643918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.643924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.643927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.643938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.643944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.643950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.643959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644048] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.644054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.644063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.644207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.644217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644344] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.644356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.644365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.644504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.644513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644582] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.331 [2024-11-20 10:01:04.644605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.331 [2024-11-20 10:01:04.644614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.331 [2024-11-20 10:01:04.644676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.331 [2024-11-20 10:01:04.644682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.331 [2024-11-20 10:01:04.644684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.331 [2024-11-20 10:01:04.644696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.331 [2024-11-20 10:01:04.644702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.332 [2024-11-20 10:01:04.644708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.332 [2024-11-20 10:01:04.644717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.332 [2024-11-20 
10:01:04.644824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.332 [2024-11-20 10:01:04.644830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.332 [2024-11-20 10:01:04.644833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.644836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.332 [2024-11-20 10:01:04.644844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.644847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.644852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.332 [2024-11-20 10:01:04.644858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.332 [2024-11-20 10:01:04.644867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.332 [2024-11-20 10:01:04.644977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.332 [2024-11-20 10:01:04.644983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.332 [2024-11-20 10:01:04.644986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.644989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.332 [2024-11-20 10:01:04.644997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.645000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.645003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.332 [2024-11-20 10:01:04.645009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.332 [2024-11-20 10:01:04.645018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.332 [2024-11-20 10:01:04.645095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.332 [2024-11-20 10:01:04.645101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.332 [2024-11-20 10:01:04.645104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.645107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.332 [2024-11-20 10:01:04.645116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.645119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.645122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.332 [2024-11-20 10:01:04.645128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.332 [2024-11-20 10:01:04.645137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.332 [2024-11-20 10:01:04.649209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.332 [2024-11-20 10:01:04.649220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.332 [2024-11-20 10:01:04.649224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.649227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.332 [2024-11-20 10:01:04.649238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.649242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:31.332 
[2024-11-20 10:01:04.649245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74d690) 00:22:31.332 [2024-11-20 10:01:04.649252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.332 [2024-11-20 10:01:04.649265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7af580, cid 3, qid 0 00:22:31.332 [2024-11-20 10:01:04.649448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:31.332 [2024-11-20 10:01:04.649453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:31.332 [2024-11-20 10:01:04.649456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:31.332 [2024-11-20 10:01:04.649460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7af580) on tqpair=0x74d690 00:22:31.332 [2024-11-20 10:01:04.649467] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:22:31.332 0% 00:22:31.332 Data Units Read: 0 00:22:31.332 Data Units Written: 0 00:22:31.332 Host Read Commands: 0 00:22:31.332 Host Write Commands: 0 00:22:31.332 Controller Busy Time: 0 minutes 00:22:31.332 Power Cycles: 0 00:22:31.332 Power On Hours: 0 hours 00:22:31.332 Unsafe Shutdowns: 0 00:22:31.332 Unrecoverable Media Errors: 0 00:22:31.332 Lifetime Error Log Entries: 0 00:22:31.332 Warning Temperature Time: 0 minutes 00:22:31.332 Critical Temperature Time: 0 minutes 00:22:31.332 00:22:31.332 Number of Queues 00:22:31.332 ================ 00:22:31.332 Number of I/O Submission Queues: 127 00:22:31.332 Number of I/O Completion Queues: 127 00:22:31.332 00:22:31.332 Active Namespaces 00:22:31.332 ================= 00:22:31.332 Namespace ID:1 00:22:31.332 Error Recovery Timeout: Unlimited 00:22:31.332 Command Set Identifier: NVM (00h) 00:22:31.332 Deallocate: Supported 00:22:31.332 Deallocated/Unwritten Error: Not 
Supported 00:22:31.332 Deallocated Read Value: Unknown 00:22:31.332 Deallocate in Write Zeroes: Not Supported 00:22:31.332 Deallocated Guard Field: 0xFFFF 00:22:31.332 Flush: Supported 00:22:31.332 Reservation: Supported 00:22:31.332 Namespace Sharing Capabilities: Multiple Controllers 00:22:31.332 Size (in LBAs): 131072 (0GiB) 00:22:31.332 Capacity (in LBAs): 131072 (0GiB) 00:22:31.332 Utilization (in LBAs): 131072 (0GiB) 00:22:31.332 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:31.332 EUI64: ABCDEF0123456789 00:22:31.332 UUID: dbaa3ad5-f697-48af-b685-f136b2413f34 00:22:31.332 Thin Provisioning: Not Supported 00:22:31.332 Per-NS Atomic Units: Yes 00:22:31.332 Atomic Boundary Size (Normal): 0 00:22:31.332 Atomic Boundary Size (PFail): 0 00:22:31.332 Atomic Boundary Offset: 0 00:22:31.332 Maximum Single Source Range Length: 65535 00:22:31.332 Maximum Copy Length: 65535 00:22:31.332 Maximum Source Range Count: 1 00:22:31.332 NGUID/EUI64 Never Reused: No 00:22:31.332 Namespace Write Protected: No 00:22:31.332 Number of LBA Formats: 1 00:22:31.332 Current LBA Format: LBA Format #00 00:22:31.332 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:31.332 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.332 rmmod nvme_tcp 00:22:31.332 rmmod nvme_fabrics 00:22:31.332 rmmod nvme_keyring 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2736555 ']' 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2736555 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2736555 ']' 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2736555 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2736555 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2736555' 00:22:31.332 killing process with pid 2736555 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2736555 00:22:31.332 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2736555 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.592 10:01:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.497 00:22:33.497 real 0m9.310s 00:22:33.497 user 0m5.362s 00:22:33.497 sys 0m4.838s 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.497 ************************************ 
00:22:33.497 END TEST nvmf_identify 00:22:33.497 ************************************ 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.497 10:01:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.756 ************************************ 00:22:33.756 START TEST nvmf_perf 00:22:33.756 ************************************ 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:33.756 * Looking for test storage... 00:22:33.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 
00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.756 --rc genhtml_branch_coverage=1 00:22:33.756 --rc genhtml_function_coverage=1 00:22:33.756 --rc genhtml_legend=1 00:22:33.756 --rc geninfo_all_blocks=1 00:22:33.756 --rc geninfo_unexecuted_blocks=1 00:22:33.756 00:22:33.756 ' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.756 --rc genhtml_branch_coverage=1 00:22:33.756 --rc genhtml_function_coverage=1 00:22:33.756 --rc genhtml_legend=1 00:22:33.756 --rc geninfo_all_blocks=1 00:22:33.756 --rc geninfo_unexecuted_blocks=1 00:22:33.756 00:22:33.756 ' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.756 --rc genhtml_branch_coverage=1 00:22:33.756 --rc genhtml_function_coverage=1 00:22:33.756 --rc genhtml_legend=1 00:22:33.756 --rc geninfo_all_blocks=1 00:22:33.756 --rc geninfo_unexecuted_blocks=1 00:22:33.756 00:22:33.756 ' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.756 --rc genhtml_branch_coverage=1 00:22:33.756 --rc genhtml_function_coverage=1 00:22:33.756 --rc genhtml_legend=1 00:22:33.756 --rc geninfo_all_blocks=1 00:22:33.756 --rc geninfo_unexecuted_blocks=1 00:22:33.756 00:22:33.756 ' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.756 10:01:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:33.756 10:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.321 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.321 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.322 
10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.322 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.322 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.322 10:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:22:40.322 00:22:40.322 --- 10.0.0.2 ping statistics --- 00:22:40.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.322 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:22:40.322 00:22:40.322 --- 10.0.0.1 ping statistics --- 00:22:40.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.322 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.322 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2740319 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2740319 00:22:40.323 
10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2740319 ']' 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.323 10:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.323 [2024-11-20 10:01:13.288857] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:22:40.323 [2024-11-20 10:01:13.288899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.323 [2024-11-20 10:01:13.365107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.323 [2024-11-20 10:01:13.408473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.323 [2024-11-20 10:01:13.408510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.323 [2024-11-20 10:01:13.408517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.323 [2024-11-20 10:01:13.408523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.323 [2024-11-20 10:01:13.408528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.323 [2024-11-20 10:01:13.410097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.323 [2024-11-20 10:01:13.410232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.323 [2024-11-20 10:01:13.410344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.323 [2024-11-20 10:01:13.410345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:40.581 10:01:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:43.867 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:43.867 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:43.867 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:43.867 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:44.126 10:01:17 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:44.126 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:44.126 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:44.126 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:44.126 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.384 [2024-11-20 10:01:17.802544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.384 10:01:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:44.643 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:44.643 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.901 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:44.901 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:44.901 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.159 [2024-11-20 10:01:18.617541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.159 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:45.417 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:45.417 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:45.417 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:45.417 10:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:46.791 Initializing NVMe Controllers 00:22:46.792 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:46.792 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:46.792 Initialization complete. Launching workers. 00:22:46.792 ======================================================== 00:22:46.792 Latency(us) 00:22:46.792 Device Information : IOPS MiB/s Average min max 00:22:46.792 PCIE (0000:5e:00.0) NSID 1 from core 0: 98370.70 384.26 324.68 34.98 7202.56 00:22:46.792 ======================================================== 00:22:46.792 Total : 98370.70 384.26 324.68 34.98 7202.56 00:22:46.792 00:22:46.792 10:01:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:48.170 Initializing NVMe Controllers 00:22:48.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:48.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:48.171 Initialization complete. Launching workers. 
00:22:48.171 ======================================================== 00:22:48.171 Latency(us) 00:22:48.171 Device Information : IOPS MiB/s Average min max 00:22:48.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.96 0.27 14753.36 105.67 45871.72 00:22:48.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.96 0.27 14951.63 7963.37 50876.30 00:22:48.171 ======================================================== 00:22:48.171 Total : 137.93 0.54 14852.50 105.67 50876.30 00:22:48.171 00:22:48.171 10:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:49.549 Initializing NVMe Controllers 00:22:49.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:49.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:49.549 Initialization complete. Launching workers. 
00:22:49.549 ======================================================== 00:22:49.549 Latency(us) 00:22:49.549 Device Information : IOPS MiB/s Average min max 00:22:49.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11237.00 43.89 2854.13 373.09 6153.16 00:22:49.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3820.00 14.92 8505.01 4400.44 47850.93 00:22:49.549 ======================================================== 00:22:49.549 Total : 15057.00 58.82 4287.77 373.09 47850.93 00:22:49.549 00:22:49.549 10:01:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:49.549 10:01:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:49.549 10:01:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:52.085 Initializing NVMe Controllers 00:22:52.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.085 Controller IO queue size 128, less than required. 00:22:52.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.085 Controller IO queue size 128, less than required. 00:22:52.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:52.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:52.085 Initialization complete. Launching workers. 
00:22:52.085 ======================================================== 00:22:52.085 Latency(us) 00:22:52.085 Device Information : IOPS MiB/s Average min max 00:22:52.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1829.56 457.39 70720.32 52329.20 135568.66 00:22:52.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.86 149.46 223627.62 85832.67 356666.77 00:22:52.085 ======================================================== 00:22:52.085 Total : 2427.42 606.86 108380.35 52329.20 356666.77 00:22:52.085 00:22:52.085 10:01:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:52.085 No valid NVMe controllers or AIO or URING devices found 00:22:52.085 Initializing NVMe Controllers 00:22:52.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.085 Controller IO queue size 128, less than required. 00:22:52.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.085 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:52.085 Controller IO queue size 128, less than required. 00:22:52.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.085 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:52.085 WARNING: Some requested NVMe devices were skipped 00:22:52.085 10:01:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:54.621 Initializing NVMe Controllers 00:22:54.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.621 Controller IO queue size 128, less than required. 00:22:54.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:54.621 Controller IO queue size 128, less than required. 00:22:54.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:54.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:54.621 Initialization complete. Launching workers. 
00:22:54.621 00:22:54.621 ==================== 00:22:54.621 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:54.621 TCP transport: 00:22:54.621 polls: 10897 00:22:54.621 idle_polls: 7371 00:22:54.621 sock_completions: 3526 00:22:54.621 nvme_completions: 6393 00:22:54.621 submitted_requests: 9558 00:22:54.621 queued_requests: 1 00:22:54.621 00:22:54.621 ==================== 00:22:54.621 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:54.621 TCP transport: 00:22:54.621 polls: 11273 00:22:54.621 idle_polls: 7934 00:22:54.621 sock_completions: 3339 00:22:54.621 nvme_completions: 6535 00:22:54.621 submitted_requests: 9856 00:22:54.621 queued_requests: 1 00:22:54.621 ======================================================== 00:22:54.621 Latency(us) 00:22:54.621 Device Information : IOPS MiB/s Average min max 00:22:54.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.30 399.07 81890.00 53068.68 127447.04 00:22:54.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1631.76 407.94 79352.58 42154.26 128385.10 00:22:54.621 ======================================================== 00:22:54.621 Total : 3228.06 807.01 80607.36 42154.26 128385.10 00:22:54.621 00:22:54.621 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:54.621 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:54.880 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.881 rmmod nvme_tcp 00:22:54.881 rmmod nvme_fabrics 00:22:54.881 rmmod nvme_keyring 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2740319 ']' 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2740319 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2740319 ']' 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2740319 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2740319 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2740319' 00:22:54.881 killing process with pid 2740319 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2740319 00:22:54.881 10:01:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2740319 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.412 10:01:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.319 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.319 00:22:59.319 real 0m25.431s 00:22:59.319 user 1m7.922s 00:22:59.319 sys 0m8.291s 00:22:59.319 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.319 10:01:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.319 ************************************ 00:22:59.319 END TEST nvmf_perf 00:22:59.319 ************************************ 00:22:59.319 10:01:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.320 ************************************ 00:22:59.320 START TEST nvmf_fio_host 00:22:59.320 ************************************ 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:59.320 * Looking for test storage... 00:22:59.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.320 10:01:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.320 10:01:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.320 --rc genhtml_branch_coverage=1 00:22:59.320 --rc genhtml_function_coverage=1 00:22:59.320 --rc genhtml_legend=1 00:22:59.320 --rc geninfo_all_blocks=1 00:22:59.320 --rc geninfo_unexecuted_blocks=1 00:22:59.320 00:22:59.320 ' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.320 --rc genhtml_branch_coverage=1 00:22:59.320 --rc genhtml_function_coverage=1 00:22:59.320 --rc genhtml_legend=1 00:22:59.320 --rc geninfo_all_blocks=1 00:22:59.320 --rc geninfo_unexecuted_blocks=1 00:22:59.320 00:22:59.320 ' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.320 --rc genhtml_branch_coverage=1 00:22:59.320 --rc genhtml_function_coverage=1 00:22:59.320 --rc genhtml_legend=1 00:22:59.320 --rc geninfo_all_blocks=1 00:22:59.320 --rc geninfo_unexecuted_blocks=1 00:22:59.320 00:22:59.320 ' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.320 --rc genhtml_branch_coverage=1 00:22:59.320 --rc genhtml_function_coverage=1 00:22:59.320 --rc genhtml_legend=1 00:22:59.320 --rc geninfo_all_blocks=1 00:22:59.320 --rc geninfo_unexecuted_blocks=1 00:22:59.320 00:22:59.320 ' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.320 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.321 10:01:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.321 10:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:23:05.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:05.904 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.904 10:01:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:05.904 Found net devices under 0000:86:00.0: cvl_0_0 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:05.904 Found net devices under 0000:86:00.1: cvl_0_1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.904 10:01:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:23:05.904 00:23:05.904 --- 10.0.0.2 ping statistics --- 00:23:05.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.904 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:05.904 00:23:05.904 --- 10.0.0.1 ping statistics --- 00:23:05.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.904 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.904 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2746651 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2746651 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2746651 ']' 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.905 10:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.905 [2024-11-20 10:01:38.861346] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:23:05.905 [2024-11-20 10:01:38.861389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.905 [2024-11-20 10:01:38.943758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.905 [2024-11-20 10:01:38.985793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.905 [2024-11-20 10:01:38.985829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.905 [2024-11-20 10:01:38.985836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.905 [2024-11-20 10:01:38.985842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.905 [2024-11-20 10:01:38.985847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.905 [2024-11-20 10:01:38.987416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.905 [2024-11-20 10:01:38.987526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.905 [2024-11-20 10:01:38.987554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.905 [2024-11-20 10:01:38.987556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.164 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.164 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:06.164 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:06.423 [2024-11-20 10:01:39.865901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.423 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:06.423 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.423 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.423 10:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:06.682 Malloc1 00:23:06.682 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.941 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:07.200 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.200 [2024-11-20 10:01:40.743579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.200 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:07.458 10:01:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:07.458 10:01:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:07.458 10:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:07.458 10:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:07.458 10:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:07.458 10:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:07.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:07.717 fio-3.35 00:23:07.717 Starting 1 thread 00:23:10.258 00:23:10.258 test: (groupid=0, jobs=1): err= 0: pid=2747144: Wed Nov 20 10:01:43 2024 00:23:10.258 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.3MiB/2005msec) 00:23:10.258 slat (nsec): min=1532, max=238578, avg=1717.22, stdev=2186.98 00:23:10.258 clat (usec): min=3118, max=10368, avg=5929.80, stdev=454.73 00:23:10.258 lat (usec): min=3151, max=10370, avg=5931.52, stdev=454.68 00:23:10.258 clat percentiles (usec): 00:23:10.258 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:10.258 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:23:10.258 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:23:10.258 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 9241], 99.95th=[ 9634], 00:23:10.258 | 99.99th=[10290] 00:23:10.258 bw ( KiB/s): min=46560, max=48136, per=99.94%, avg=47618.00, stdev=741.32, samples=4 00:23:10.258 iops : min=11640, max=12034, avg=11904.50, stdev=185.33, samples=4 00:23:10.258 write: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec); 0 zone resets 00:23:10.258 slat (nsec): min=1565, max=225991, avg=1776.19, stdev=1652.84 00:23:10.258 clat (usec): min=2417, max=8597, avg=4792.79, stdev=367.62 00:23:10.258 lat (usec): min=2433, max=8603, avg=4794.56, stdev=367.70 00:23:10.258 clat percentiles (usec): 00:23:10.258 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:10.258 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:23:10.258 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:10.258 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 8094], 00:23:10.258 | 99.99th=[ 8586] 00:23:10.258 bw ( KiB/s): min=47112, max=47936, per=100.00%, avg=47442.00, stdev=397.51, samples=4 00:23:10.258 iops : min=11778, max=11984, avg=11860.50, stdev=99.38, samples=4 00:23:10.258 lat (msec) : 4=0.74%, 10=99.25%, 20=0.01% 00:23:10.258 cpu : usr=75.20%, sys=23.75%, ctx=118, majf=0, minf=3 00:23:10.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:10.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:10.258 issued rwts: total=23882,23775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:10.258 00:23:10.258 Run status group 0 (all jobs): 00:23:10.258 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.8MB), run=2005-2005msec 00:23:10.258 WRITE: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec 00:23:10.258 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:10.259 10:01:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:10.525 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:10.525 fio-3.35 00:23:10.525 Starting 1 thread 00:23:13.062 00:23:13.062 test: (groupid=0, jobs=1): err= 0: pid=2747616: Wed Nov 20 10:01:46 2024 00:23:13.062 read: IOPS=11.0k, BW=173MiB/s (181MB/s)(346MiB/2005msec) 00:23:13.062 slat (nsec): min=2481, max=93116, avg=2855.27, stdev=1326.66 00:23:13.062 clat (usec): min=1188, max=14166, avg=6682.33, stdev=1598.89 00:23:13.062 lat (usec): min=1190, max=14180, avg=6685.18, stdev=1599.11 00:23:13.062 clat percentiles (usec): 00:23:13.062 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:23:13.062 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7177], 00:23:13.062 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9372], 00:23:13.062 | 99.00th=[10552], 99.50th=[11731], 99.90th=[13435], 99.95th=[13698], 00:23:13.062 | 99.99th=[14091] 00:23:13.062 bw ( KiB/s): min=85344, max=97564, per=50.52%, avg=89239.00, stdev=5639.74, samples=4 00:23:13.062 iops : min= 5334, max= 6097, avg=5577.25, stdev=352.11, samples=4 00:23:13.062 write: IOPS=6421, BW=100MiB/s (105MB/s)(182MiB/1817msec); 0 zone resets 00:23:13.062 slat (usec): min=29, max=388, avg=31.82, stdev= 7.80 00:23:13.062 clat (usec): min=2880, max=15095, avg=8585.34, stdev=1582.33 00:23:13.062 lat (usec): min=2909, max=15212, avg=8617.16, stdev=1583.98 00:23:13.062 clat percentiles (usec): 00:23:13.062 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6783], 
20.00th=[ 7242], 00:23:13.062 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:13.062 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:23:13.062 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14746], 99.95th=[14877], 00:23:13.062 | 99.99th=[15008] 00:23:13.062 bw ( KiB/s): min=88096, max=101716, per=90.48%, avg=92957.00, stdev=5998.74, samples=4 00:23:13.062 iops : min= 5506, max= 6357, avg=5809.75, stdev=374.80, samples=4 00:23:13.062 lat (msec) : 2=0.02%, 4=2.01%, 10=90.23%, 20=7.74% 00:23:13.062 cpu : usr=86.68%, sys=12.67%, ctx=42, majf=0, minf=3 00:23:13.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:13.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:13.062 issued rwts: total=22137,11667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:13.062 00:23:13.062 Run status group 0 (all jobs): 00:23:13.062 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2005-2005msec 00:23:13.062 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=182MiB (191MB), run=1817-1817msec 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.062 rmmod nvme_tcp 00:23:13.062 rmmod nvme_fabrics 00:23:13.062 rmmod nvme_keyring 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.062 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2746651 ']' 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2746651 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2746651 ']' 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2746651 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2746651 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2746651' 
00:23:13.063 killing process with pid 2746651 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2746651 00:23:13.063 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2746651 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.322 10:01:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.228 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.228 00:23:15.228 real 0m16.195s 00:23:15.228 user 0m47.464s 00:23:15.228 sys 0m6.458s 00:23:15.228 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.228 10:01:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.228 ************************************ 
00:23:15.228 END TEST nvmf_fio_host 00:23:15.228 ************************************ 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.488 ************************************ 00:23:15.488 START TEST nvmf_failover 00:23:15.488 ************************************ 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:15.488 * Looking for test storage... 00:23:15.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:15.488 10:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.488 10:01:49 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.488 --rc genhtml_branch_coverage=1 00:23:15.488 --rc genhtml_function_coverage=1 00:23:15.488 --rc genhtml_legend=1 00:23:15.488 --rc geninfo_all_blocks=1 00:23:15.488 --rc geninfo_unexecuted_blocks=1 00:23:15.488 00:23:15.488 ' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.488 --rc genhtml_branch_coverage=1 00:23:15.488 --rc genhtml_function_coverage=1 00:23:15.488 --rc genhtml_legend=1 00:23:15.488 --rc geninfo_all_blocks=1 00:23:15.488 --rc geninfo_unexecuted_blocks=1 00:23:15.488 00:23:15.488 ' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.488 --rc genhtml_branch_coverage=1 00:23:15.488 --rc genhtml_function_coverage=1 00:23:15.488 --rc genhtml_legend=1 00:23:15.488 --rc geninfo_all_blocks=1 00:23:15.488 --rc geninfo_unexecuted_blocks=1 00:23:15.488 00:23:15.488 ' 00:23:15.488 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.488 --rc genhtml_branch_coverage=1 00:23:15.488 --rc genhtml_function_coverage=1 00:23:15.488 --rc genhtml_legend=1 00:23:15.488 --rc 
geninfo_all_blocks=1 00:23:15.488 --rc geninfo_unexecuted_blocks=1 00:23:15.488 00:23:15.488 ' 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:15.489 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.753 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.754 10:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.334 10:01:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:22.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:22.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:22.334 Found net devices under 0000:86:00.0: cvl_0_0 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:22.334 Found net devices under 0000:86:00.1: cvl_0_1 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:22.334 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:23:22.335 00:23:22.335 --- 10.0.0.2 ping statistics --- 00:23:22.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.335 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:22.335 10:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:22.335 00:23:22.335 --- 10.0.0.1 ping statistics --- 00:23:22.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.335 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2751581 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2751581 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2751581 ']' 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.335 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.335 [2024-11-20 10:01:55.104924] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:23:22.335 [2024-11-20 10:01:55.104967] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.335 [2024-11-20 10:01:55.182617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:22.335 [2024-11-20 10:01:55.224473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.335 [2024-11-20 10:01:55.224509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.335 [2024-11-20 10:01:55.224516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.335 [2024-11-20 10:01:55.224522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:22.335 [2024-11-20 10:01:55.224527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.335 [2024-11-20 10:01:55.225977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.335 [2024-11-20 10:01:55.226085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.335 [2024-11-20 10:01:55.226086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.594 10:01:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:22.594 [2024-11-20 10:01:56.152846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.853 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:22.853 Malloc0 00:23:22.853 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.112 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:23.371 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:23.371 [2024-11-20 10:01:56.939632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:23.629 10:01:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:23.629 [2024-11-20 10:01:57.144262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:23.629 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:23.889 [2024-11-20 10:01:57.336872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2752055
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2752055 /var/tmp/bdevperf.sock
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2752055 ']'
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:23.889 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:24.148 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:24.148 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:24.148 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:24.406 NVMe0n1
00:23:24.406 10:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:24.974
00:23:24.974 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2752077
00:23:24.974 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:23:24.974 10:01:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:25.932 10:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:25.932 [2024-11-20 10:01:59.484347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2a2d0 is same with the state(6) to be set
00:23:26.231 10:01:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:29.571 10:02:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:29.571
00:23:29.571 10:02:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:29.829 [2024-11-20 10:02:03.156964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b060 is same with the state(6) to be set
00:23:29.829 10:02:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:33.117 10:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:33.117 [2024-11-20 10:02:06.367116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:33.117 10:02:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:34.056 10:02:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:34.056 [2024-11-20 10:02:07.582736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2be30 is same with the state(6) to be set
00:23:34.057 10:02:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2752077
00:23:40.635 {
00:23:40.635 "results": [
00:23:40.635 {
00:23:40.635 "job": "NVMe0n1",
00:23:40.635 "core_mask": "0x1",
00:23:40.635 "workload": "verify",
00:23:40.635 "status": "finished",
00:23:40.635 "verify_range": {
00:23:40.635 "start": 0,
00:23:40.635 "length": 16384
00:23:40.635 },
00:23:40.635 "queue_depth": 128,
00:23:40.635 "io_size": 4096,
00:23:40.635 "runtime": 15.001414,
00:23:40.635 "iops": 11179.212839536327,
00:23:40.635 "mibps": 43.66880015443878,
00:23:40.635 "io_failed": 10501,
00:23:40.635 "io_timeout": 0,
00:23:40.635 "avg_latency_us": 10753.740576035358,
00:23:40.635 "min_latency_us": 557.8361904761905,
00:23:40.635 "max_latency_us": 28211.687619047618
00:23:40.635 }
00:23:40.635 ],
00:23:40.635 "core_count": 1
00:23:40.635 }
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2752055 ']'
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2752055'
killing process with pid 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2752055
00:23:40.635 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:40.635 [2024-11-20 10:01:57.413010] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:23:40.635 [2024-11-20 10:01:57.413064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752055 ]
00:23:40.635 [2024-11-20 10:01:57.486294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:40.635 [2024-11-20 10:01:57.528755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:40.635 Running I/O for 15 seconds...
00:23:40.635 11516.00 IOPS, 44.98 MiB/s [2024-11-20T09:02:14.217Z] [2024-11-20 10:01:59.486616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.635 [2024-11-20 10:01:59.486849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.635 [2024-11-20 10:01:59.486954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.635 [2024-11-20 10:01:59.486962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.486969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.486977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.486992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.486998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:40.636 [2024-11-20 10:01:59.487242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:40.636 [2024-11-20 10:01:59.487250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 10:01:59.487257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:40.636 [2024-11-20 10:01:59.487556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.636 [2024-11-20 10:01:59.487597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.636 [2024-11-20 10:01:59.487605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.637 [2024-11-20 10:01:59.487621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100808 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100816 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100824 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100832 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487757] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100840 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100856 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100864 len:8 PRP1 0x0 PRP2 0x0 
00:23:40.637 [2024-11-20 10:01:59.487852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100872 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100880 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100888 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487946] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100896 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.487978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.487984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100904 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.487991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.487998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100912 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488035] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100920 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100928 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100936 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100944 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100952 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100960 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100968 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488220] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100976 len:8 PRP1 0x0 PRP2 0x0 00:23:40.637 [2024-11-20 10:01:59.488234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.637 [2024-11-20 10:01:59.488241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.637 [2024-11-20 10:01:59.488246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.637 [2024-11-20 10:01:59.488253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100984 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100992 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101000 len:8 PRP1 0x0 PRP2 0x0 
00:23:40.638 [2024-11-20 10:01:59.488314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101008 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101016 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101024 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488402] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101032 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101040 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101048 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101056 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101064 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101072 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101080 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101088 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101096 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488669] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0 
00:23:40.638 [2024-11-20 10:01:59.488787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101152 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101160 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488872] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.638 [2024-11-20 10:01:59.488878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.638 [2024-11-20 10:01:59.488884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:23:40.638 [2024-11-20 10:01:59.488891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.638 [2024-11-20 10:01:59.488898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.488903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.488911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.488918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.488925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.488930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.488937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.488944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.488951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.488956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500768] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 
00:23:40.639 [2024-11-20 10:01:59.500848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500924] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.639 [2024-11-20 10:01:59.500980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:23:40.639 [2024-11-20 10:01:59.500985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.639 [2024-11-20 10:01:59.500992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.639 [2024-11-20 10:01:59.500997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.640 [2024-11-20 10:01:59.501003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:23:40.640 [2024-11-20 10:01:59.501009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.640 [2024-11-20 10:01:59.501021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.640 [2024-11-20 10:01:59.501026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:23:40.640 [2024-11-20 10:01:59.501032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501076] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:40.640 [2024-11-20 10:01:59.501099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.640 [2024-11-20 10:01:59.501106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.640 [2024-11-20 10:01:59.501121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.640 [2024-11-20 10:01:59.501134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.640 [2024-11-20 10:01:59.501147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:01:59.501155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:40.640 [2024-11-20 10:01:59.501197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4340 (9): Bad file descriptor 00:23:40.640 [2024-11-20 10:01:59.504915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:40.640 [2024-11-20 10:01:59.533324] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
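The flood of entries above (queued WRITEs manually completed with ABORTED - SQ DELETION during the failover from 10.0.0.2:4420 to 10.0.0.2:4421) all follow the fixed format printed by `nvme_io_qpair_print_command` in `nvme_qpair.c`. When triaging logs like this, it can help to extract the affected LBAs programmatically. The sketch below is a hypothetical helper, not part of SPDK or this test suite; the regex is derived only from the line format visible in this log.

```python
import re

# Hypothetical log-triage helper (not part of SPDK): pull the opcode, LBA, and
# length out of the "nvme_io_qpair_print_command" lines seen in this console log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>\w+) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def parse_cmd(line):
    """Return (opcode, lba, len) for a printed I/O command line, or None."""
    m = CMD_RE.search(line)
    if m is None:
        return None
    return m.group("op"), int(m.group("lba")), int(m.group("len"))

# Sample line copied from this log.
sample = ("[2024-11-20 10:01:59.488598] nvme_qpair.c: 243:"
          "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 "
          "nsid:1 lba:101088 len:8 PRP1 0x0 PRP2 0x0")
print(parse_cmd(sample))  # -> ('WRITE', 101088, 8)
```

Feeding the whole console log through this and collecting the LBAs would show the contiguous 8-block WRITE stride (101088, 101096, 101104, ...) that was queued when the submission queue was deleted for the failover.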
00:23:40.640 11167.00 IOPS, 43.62 MiB/s [2024-11-20T09:02:14.222Z] 11237.33 IOPS, 43.90 MiB/s [2024-11-20T09:02:14.222Z] 11301.50 IOPS, 44.15 MiB/s [2024-11-20T09:02:14.222Z] [2024-11-20 10:02:03.158502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 
[2024-11-20 10:02:03.158785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.640 [2024-11-20 10:02:03.158969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.640 [2024-11-20 10:02:03.158975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.158983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.158990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.158998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 
10:02:03.159198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159283] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.641 [2024-11-20 10:02:03.159526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.641 [2024-11-20 10:02:03.159539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.641 [2024-11-20 10:02:03.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 [2024-11-20 10:02:03.159554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 [2024-11-20 10:02:03.159568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 [2024-11-20 10:02:03.159582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 [2024-11-20 10:02:03.159595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 
[2024-11-20 10:02:03.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.642 [2024-11-20 10:02:03.159625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49736 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.642 [2024-11-20 10:02:03.159694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.642 [2024-11-20 10:02:03.159708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.642 [2024-11-20 10:02:03.159722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.642 [2024-11-20 10:02:03.159735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4340 is same with the state(6) to be set 00:23:40.642 [2024-11-20 10:02:03.159879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.159887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49744 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.159912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49752 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.159935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159940] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49760 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.159960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.159979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.159984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.159989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49776 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.159995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49784 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49792 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49800 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49808 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160102] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49816 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49824 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49832 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49840 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 
[2024-11-20 10:02:03.160184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49848 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49856 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49864 len:8 PRP1 0x0 PRP2 0x0 00:23:40.642 [2024-11-20 10:02:03.160261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.642 [2024-11-20 10:02:03.160267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:40.642 [2024-11-20 10:02:03.160272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.642 [2024-11-20 10:02:03.160277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49872 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49880 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49888 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49896 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49904 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49912 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49920 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49928 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49936 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49000 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49008 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49016 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49024 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49032 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49040 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49048 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 [2024-11-20 10:02:03.160645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.643 [2024-11-20 10:02:03.160650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49056 len:8 PRP1 0x0 PRP2 0x0 00:23:40.643 [2024-11-20 10:02:03.160657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.643 [2024-11-20 10:02:03.160664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.643 
[... identical abort cycle (nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o -> 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually -> 243:nvme_io_qpair_print_command -> 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeated for READ commands lba:49064 through lba:49184 and WRITE commands lba:49944 through lba:49960 and lba:49192 through lba:49552, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, timestamps 10:02:03.160670 through 10:02:03.179974 ...]
[2024-11-20 10:02:03.179982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.179991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49560 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49568 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49576 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:49584 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49592 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49600 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49608 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180199] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49616 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.646 [2024-11-20 10:02:03.180234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.646 [2024-11-20 10:02:03.180240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.646 [2024-11-20 10:02:03.180247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49624 len:8 PRP1 0x0 PRP2 0x0 00:23:40.646 [2024-11-20 10:02:03.180255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49632 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 
10:02:03.180308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49640 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49648 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49656 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49664 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49672 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48944 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48952 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180517] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48960 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48968 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48976 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48984 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 
[2024-11-20 10:02:03.180623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48992 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49680 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49688 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49696 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49704 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49720 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49728 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.647 [2024-11-20 10:02:03.180886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.647 [2024-11-20 10:02:03.180892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49736 len:8 PRP1 0x0 PRP2 0x0 00:23:40.647 [2024-11-20 10:02:03.180901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.647 [2024-11-20 10:02:03.180950] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:40.647 [2024-11-20 10:02:03.180960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:23:40.647 [2024-11-20 10:02:03.180998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4340 (9): Bad file descriptor
00:23:40.647 [2024-11-20 10:02:03.185431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:40.647 [2024-11-20 10:02:03.215471] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:40.647 11186.40 IOPS, 43.70 MiB/s [2024-11-20T09:02:14.229Z] 11215.00 IOPS, 43.81 MiB/s [2024-11-20T09:02:14.229Z] 11256.57 IOPS, 43.97 MiB/s [2024-11-20T09:02:14.229Z] 11271.25 IOPS, 44.03 MiB/s [2024-11-20T09:02:14.229Z] 11283.11 IOPS, 44.07 MiB/s [2024-11-20T09:02:14.229Z]
00:23:40.648 [2024-11-20 10:02:07.584531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.648 [2024-11-20 10:02:07.584562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pair repeats for READ lba:63824 through lba:64200 (len:8 each, various cids), every command completed with ABORTED - SQ DELETION (00/08); log truncated mid-entry ...]
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.649 [2024-11-20 10:02:07.585371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.649 [2024-11-20 10:02:07.585385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 
10:02:07.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.649 [2024-11-20 10:02:07.585656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.649 [2024-11-20 10:02:07.585663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:40.650 [2024-11-20 10:02:07.585786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.585987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.585993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 
[2024-11-20 10:02:07.586030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.650 [2024-11-20 10:02:07.586241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.650 [2024-11-20 10:02:07.586247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 
10:02:07.586283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.651 [2024-11-20 10:02:07.586421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.651 [2024-11-20 10:02:07.586447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.651 [2024-11-20 10:02:07.586453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:23:40.651 [2024-11-20 10:02:07.586459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586504] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:40.651 [2024-11-20 10:02:07.586524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.651 [2024-11-20 10:02:07.586531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.651 [2024-11-20 10:02:07.586545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.651 [2024-11-20 10:02:07.586558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.651 [2024-11-20 10:02:07.586571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.651 [2024-11-20 10:02:07.586577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:40.651 [2024-11-20 10:02:07.586599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4340 (9): Bad file descriptor 00:23:40.651 [2024-11-20 10:02:07.589332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:40.651 [2024-11-20 10:02:07.743902] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:40.651 11109.20 IOPS, 43.40 MiB/s [2024-11-20T09:02:14.233Z] 11130.36 IOPS, 43.48 MiB/s [2024-11-20T09:02:14.233Z] 11139.33 IOPS, 43.51 MiB/s [2024-11-20T09:02:14.233Z] 11159.92 IOPS, 43.59 MiB/s [2024-11-20T09:02:14.233Z] 11175.86 IOPS, 43.66 MiB/s 00:23:40.651 Latency(us) 00:23:40.651 [2024-11-20T09:02:14.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:40.651 Verification LBA range: start 0x0 length 0x4000 00:23:40.651 NVMe0n1 : 15.00 11179.21 43.67 700.00 0.00 10753.74 557.84 28211.69 00:23:40.651 [2024-11-20T09:02:14.233Z] =================================================================================================================== 00:23:40.651 [2024-11-20T09:02:14.233Z] Total : 11179.21 43.67 700.00 0.00 10753.74 557.84 28211.69 00:23:40.651 Received shutdown signal, test time was about 15.000000 seconds 00:23:40.651 00:23:40.651 Latency(us) 00:23:40.651 [2024-11-20T09:02:14.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.651 [2024-11-20T09:02:14.233Z] =================================================================================================================== 00:23:40.651 [2024-11-20T09:02:14.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2754603 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2754603 /var/tmp/bdevperf.sock 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2754603 ']' 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:40.651 10:02:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.651 [2024-11-20 10:02:14.112500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.651 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:40.910 [2024-11-20 10:02:14.321072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:40.910 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:41.477 NVMe0n1 00:23:41.477 10:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:41.477 00:23:41.736 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:41.995 00:23:41.995 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:41.995 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:41.995 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.254 10:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:45.543 10:02:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.543 10:02:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:45.543 10:02:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.543 10:02:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2755522 00:23:45.543 10:02:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2755522 00:23:46.490 { 00:23:46.490 "results": [ 00:23:46.490 { 00:23:46.490 "job": "NVMe0n1", 00:23:46.490 "core_mask": "0x1", 00:23:46.490 "workload": "verify", 00:23:46.490 "status": "finished", 00:23:46.490 "verify_range": { 00:23:46.490 "start": 0, 00:23:46.490 "length": 16384 00:23:46.490 }, 00:23:46.490 "queue_depth": 128, 00:23:46.490 "io_size": 4096, 00:23:46.490 "runtime": 1.011111, 00:23:46.490 "iops": 11196.594636988422, 00:23:46.490 "mibps": 43.736697800736025, 00:23:46.490 "io_failed": 0, 00:23:46.490 "io_timeout": 0, 00:23:46.490 "avg_latency_us": 
11387.0112837079, 00:23:46.490 "min_latency_us": 2246.9485714285715, 00:23:46.490 "max_latency_us": 11047.497142857143 00:23:46.490 } 00:23:46.490 ], 00:23:46.490 "core_count": 1 00:23:46.490 } 00:23:46.490 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:46.490 [2024-11-20 10:02:13.731553] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:23:46.490 [2024-11-20 10:02:13.731606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754603 ] 00:23:46.490 [2024-11-20 10:02:13.804384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.490 [2024-11-20 10:02:13.841849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.490 [2024-11-20 10:02:15.702600] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:46.490 [2024-11-20 10:02:15.702649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.490 [2024-11-20 10:02:15.702661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.490 [2024-11-20 10:02:15.702670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.490 [2024-11-20 10:02:15.702677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.490 [2024-11-20 10:02:15.702684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:46.490 [2024-11-20 10:02:15.702691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.490 [2024-11-20 10:02:15.702698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.491 [2024-11-20 10:02:15.702705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.491 [2024-11-20 10:02:15.702711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:46.491 [2024-11-20 10:02:15.702738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:46.491 [2024-11-20 10:02:15.702752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7c340 (9): Bad file descriptor 00:23:46.491 [2024-11-20 10:02:15.748219] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:46.491 Running I/O for 1 seconds... 
00:23:46.491 11164.00 IOPS, 43.61 MiB/s 00:23:46.491 Latency(us) 00:23:46.491 [2024-11-20T09:02:20.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.491 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:46.491 Verification LBA range: start 0x0 length 0x4000 00:23:46.491 NVMe0n1 : 1.01 11196.59 43.74 0.00 0.00 11387.01 2246.95 11047.50 00:23:46.491 [2024-11-20T09:02:20.073Z] =================================================================================================================== 00:23:46.491 [2024-11-20T09:02:20.073Z] Total : 11196.59 43.74 0.00 0.00 11387.01 2246.95 11047.50 00:23:46.491 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:46.491 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:46.749 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:47.008 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:47.008 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:47.321 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:47.321 10:02:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:50.606 10:02:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.606 10:02:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2754603 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2754603 ']' 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2754603 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2754603 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2754603' 00:23:50.606 killing process with pid 2754603 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2754603 00:23:50.606 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2754603 00:23:50.865 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:50.865 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.124 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:51.124 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.125 rmmod nvme_tcp 00:23:51.125 rmmod nvme_fabrics 00:23:51.125 rmmod nvme_keyring 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2751581 ']' 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2751581 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2751581 ']' 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2751581 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751581 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751581' 00:23:51.125 killing process with pid 2751581 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2751581 00:23:51.125 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2751581 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.384 10:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.289 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.289 00:23:53.289 real 0m37.960s 00:23:53.289 user 1m59.930s 00:23:53.289 sys 
0m8.058s 00:23:53.289 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.289 10:02:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:53.289 ************************************ 00:23:53.289 END TEST nvmf_failover 00:23:53.289 ************************************ 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.549 ************************************ 00:23:53.549 START TEST nvmf_host_discovery 00:23:53.549 ************************************ 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:53.549 * Looking for test storage... 
00:23:53.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.549 10:02:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.549 --rc genhtml_branch_coverage=1 00:23:53.549 --rc genhtml_function_coverage=1 00:23:53.549 --rc 
genhtml_legend=1 00:23:53.549 --rc geninfo_all_blocks=1 00:23:53.549 --rc geninfo_unexecuted_blocks=1 00:23:53.549 00:23:53.549 ' 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.549 --rc genhtml_branch_coverage=1 00:23:53.549 --rc genhtml_function_coverage=1 00:23:53.549 --rc genhtml_legend=1 00:23:53.549 --rc geninfo_all_blocks=1 00:23:53.549 --rc geninfo_unexecuted_blocks=1 00:23:53.549 00:23:53.549 ' 00:23:53.549 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.550 10:02:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.550 10:02:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.550 10:02:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.550 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.551 10:02:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.123 
10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.123 10:02:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:00.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:00.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.123 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:00.123 Found net devices under 0000:86:00.0: cvl_0_0 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:00.124 Found net devices under 0000:86:00.1: cvl_0_1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.124 10:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:24:00.124 00:24:00.124 --- 10.0.0.2 ping statistics --- 00:24:00.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.124 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:00.124 00:24:00.124 --- 10.0.0.1 ping statistics --- 00:24:00.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.124 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.124 
10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2759974 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2759974 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2759974 ']' 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 [2024-11-20 10:02:33.118333] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:24:00.124 [2024-11-20 10:02:33.118384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.124 [2024-11-20 10:02:33.182140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.124 [2024-11-20 10:02:33.223562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.124 [2024-11-20 10:02:33.223597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.124 [2024-11-20 10:02:33.223604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.124 [2024-11-20 10:02:33.223611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.124 [2024-11-20 10:02:33.223616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.124 [2024-11-20 10:02:33.224168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 [2024-11-20 10:02:33.367337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 [2024-11-20 10:02:33.379541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:00.124 10:02:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 null0 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.124 null1 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.124 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2759997 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2759997 /tmp/host.sock 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2759997 ']' 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:00.125 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.125 [2024-11-20 10:02:33.458007] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:24:00.125 [2024-11-20 10:02:33.458045] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759997 ] 00:24:00.125 [2024-11-20 10:02:33.532089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.125 [2024-11-20 10:02:33.572233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:00.125 
10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.125 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:00.385 10:02:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:00.385 
10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.385 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.644 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:00.644 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 [2024-11-20 10:02:33.973047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.645 10:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:00.645 10:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:01.213 [2024-11-20 10:02:34.695016] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:01.213 [2024-11-20 10:02:34.695037] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:01.213 [2024-11-20 10:02:34.695049] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:01.472 [2024-11-20 10:02:34.824444] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:01.472 [2024-11-20 10:02:34.925262] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:01.472 [2024-11-20 10:02:34.926058] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf14dd0:1 started. 
00:24:01.472 [2024-11-20 10:02:34.927420] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:01.473 [2024-11-20 10:02:34.927436] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:01.473 [2024-11-20 10:02:34.975811] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf14dd0 was disconnected and freed. delete nvme_qpair. 00:24:01.731 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:01.732 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:01.991 10:02:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:01.991 
10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.991 [2024-11-20 10:02:35.535091] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf151a0:1 started. 00:24:01.991 [2024-11-20 10:02:35.545767] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf151a0 was disconnected and freed. delete nvme_qpair. 00:24:01.991 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.992 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:01.992 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:01.992 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:01.992 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:01.992 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:02.251 10:02:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.251 [2024-11-20 10:02:35.621516] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.251 [2024-11-20 10:02:35.622435] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:02.251 [2024-11-20 10:02:35.622455] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.251 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:02.252 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.252 [2024-11-20 10:02:35.751290] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for 
nvme0 00:24:02.252 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:02.252 10:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:02.510 [2024-11-20 10:02:36.054649] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:02.510 [2024-11-20 10:02:36.054684] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:02.510 [2024-11-20 10:02:36.054692] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:02.510 [2024-11-20 10:02:36.054697] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.449 [2024-11-20 10:02:36.857840] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:03.449 [2024-11-20 10:02:36.857861] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:03.449 [2024-11-20 10:02:36.859454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.449 [2024-11-20 10:02:36.859469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.449 [2024-11-20 
10:02:36.859477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.449 [2024-11-20 10:02:36.859484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.449 [2024-11-20 10:02:36.859491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.449 [2024-11-20 10:02:36.859497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.449 [2024-11-20 10:02:36.859504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.449 [2024-11-20 10:02:36.859510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.449 [2024-11-20 10:02:36.859520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:03.449 
10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.449 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:03.449 [2024-11-20 10:02:36.869467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.449 [2024-11-20 10:02:36.879502] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.449 [2024-11-20 10:02:36.879515] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.449 [2024-11-20 10:02:36.879519] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.449 [2024-11-20 10:02:36.879524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.449 [2024-11-20 10:02:36.879540] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
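The trace above repeatedly expands helpers from `host/discovery.sh` (`@55`, `@59`, `@63`): an RPC call over `/tmp/host.sock` piped through `jq`, `sort`, and `xargs`. A standalone sketch of those helpers, inferred only from this trace (the `rpc_cmd` stub below is hypothetical; the real one talks to the SPDK application over the RPC socket, and `jq` is assumed to be installed):

```shell
# Sketch of the discovery.sh helpers seen expanded in the xtrace.
# rpc_cmd is stubbed with canned JSON so the jq pipelines can run
# standalone; in the real test it queries SPDK over /tmp/host.sock.
rpc_cmd() {
	case "$*" in
		*bdev_nvme_get_controllers*) echo '[{"name": "nvme0"}]' ;;
		*bdev_get_bdevs*) echo '[{"name": "nvme0n1"}, {"name": "nvme0n2"}]' ;;
	esac
}

# host/discovery.sh@59: controller names as a single sorted line.
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
		| jq -r '.[].name' | sort | xargs
}

# host/discovery.sh@55: bdev names as a single sorted line.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
		| jq -r '.[].name' | sort | xargs
}
```

The trailing `xargs` (with no command) collapses the sorted names onto one space-separated line, which is why the trace later compares against strings like `"nvme0n1 nvme0n2"`.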
00:24:03.449 [2024-11-20 10:02:36.879779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.449 [2024-11-20 10:02:36.879794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.449 [2024-11-20 10:02:36.879802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.449 [2024-11-20 10:02:36.879813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.879823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.450 [2024-11-20 10:02:36.879829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.450 [2024-11-20 10:02:36.879837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.450 [2024-11-20 10:02:36.879843] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.450 [2024-11-20 10:02:36.879848] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.450 [2024-11-20 10:02:36.879853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.450 [2024-11-20 10:02:36.889571] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.450 [2024-11-20 10:02:36.889584] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:03.450 [2024-11-20 10:02:36.889588] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.889592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.450 [2024-11-20 10:02:36.889604] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.889868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.450 [2024-11-20 10:02:36.889880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.450 [2024-11-20 10:02:36.889887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.450 [2024-11-20 10:02:36.889897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.889907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.450 [2024-11-20 10:02:36.889913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.450 [2024-11-20 10:02:36.889919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.450 [2024-11-20 10:02:36.889925] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.450 [2024-11-20 10:02:36.889929] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.450 [2024-11-20 10:02:36.889933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:03.450 [2024-11-20 10:02:36.899635] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.450 [2024-11-20 10:02:36.899645] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.450 [2024-11-20 10:02:36.899649] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.899653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.450 [2024-11-20 10:02:36.899665] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.899835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.450 [2024-11-20 10:02:36.899847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.450 [2024-11-20 10:02:36.899855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.450 [2024-11-20 10:02:36.899867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.899878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.450 [2024-11-20 10:02:36.899884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.450 [2024-11-20 10:02:36.899891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.450 [2024-11-20 10:02:36.899897] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:03.450 [2024-11-20 10:02:36.899901] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.450 [2024-11-20 10:02:36.899905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.450 [2024-11-20 10:02:36.909697] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.450 [2024-11-20 10:02:36.909715] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.450 [2024-11-20 10:02:36.909720] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.909725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.450 [2024-11-20 10:02:36.909739] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:03.450 [2024-11-20 10:02:36.909855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.450 [2024-11-20 10:02:36.909867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.450 [2024-11-20 10:02:36.909875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.450 [2024-11-20 10:02:36.909886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.909895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.450 [2024-11-20 10:02:36.909901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.450 [2024-11-20 10:02:36.909908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.450 [2024-11-20 10:02:36.909914] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.450 [2024-11-20 10:02:36.909919] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.450 [2024-11-20 10:02:36.909923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.450 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:03.450 [2024-11-20 10:02:36.919769] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:24:03.450 [2024-11-20 10:02:36.919783] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.450 [2024-11-20 10:02:36.919787] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.919792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.450 [2024-11-20 10:02:36.919809] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.919976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.450 [2024-11-20 10:02:36.919987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.450 [2024-11-20 10:02:36.919994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.450 [2024-11-20 10:02:36.920004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.920013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.450 [2024-11-20 10:02:36.920019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.450 [2024-11-20 10:02:36.920025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.450 [2024-11-20 10:02:36.920030] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.450 [2024-11-20 10:02:36.920035] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:03.450 [2024-11-20 10:02:36.920038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.450 [2024-11-20 10:02:36.929839] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.450 [2024-11-20 10:02:36.929852] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.450 [2024-11-20 10:02:36.929857] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.929861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.450 [2024-11-20 10:02:36.929875] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.450 [2024-11-20 10:02:36.929976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.450 [2024-11-20 10:02:36.929987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.450 [2024-11-20 10:02:36.929994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.450 [2024-11-20 10:02:36.930004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.450 [2024-11-20 10:02:36.930013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.451 [2024-11-20 10:02:36.930019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.451 [2024-11-20 10:02:36.930026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:03.451 [2024-11-20 10:02:36.930032] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.451 [2024-11-20 10:02:36.930037] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.451 [2024-11-20 10:02:36.930042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.451 [2024-11-20 10:02:36.939906] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.451 [2024-11-20 10:02:36.939916] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.451 [2024-11-20 10:02:36.939920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.939932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.451 [2024-11-20 10:02:36.939945] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:03.451 [2024-11-20 10:02:36.940054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.451 [2024-11-20 10:02:36.940065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.451 [2024-11-20 10:02:36.940072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.451 [2024-11-20 10:02:36.940081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.451 [2024-11-20 10:02:36.940095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.451 [2024-11-20 10:02:36.940102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.451 [2024-11-20 10:02:36.940108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.451 [2024-11-20 10:02:36.940113] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.451 [2024-11-20 10:02:36.940118] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.451 [2024-11-20 10:02:36.940121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.451 [2024-11-20 10:02:36.949976] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.451 [2024-11-20 10:02:36.949988] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:03.451 [2024-11-20 10:02:36.949993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.949997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.451 [2024-11-20 10:02:36.950010] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.950110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.451 [2024-11-20 10:02:36.950122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.451 [2024-11-20 10:02:36.950129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.451 [2024-11-20 10:02:36.950139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.451 [2024-11-20 10:02:36.950148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.451 [2024-11-20 10:02:36.950154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.451 [2024-11-20 10:02:36.950161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.451 [2024-11-20 10:02:36.950166] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.451 [2024-11-20 10:02:36.950170] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.451 [2024-11-20 10:02:36.950174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.451 [2024-11-20 10:02:36.960041] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.451 [2024-11-20 10:02:36.960055] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.451 [2024-11-20 10:02:36.960059] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.960063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.451 [2024-11-20 10:02:36.960075] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.960303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.451 [2024-11-20 10:02:36.960316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.451 [2024-11-20 10:02:36.960323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.451 [2024-11-20 10:02:36.960334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.451 [2024-11-20 10:02:36.960350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.451 [2024-11-20 10:02:36.960357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.451 [2024-11-20 10:02:36.960364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:03.451 [2024-11-20 10:02:36.960370] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.451 [2024-11-20 10:02:36.960376] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.451 [2024-11-20 10:02:36.960380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:03.451 10:02:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.451 [2024-11-20 10:02:36.970106] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.451 [2024-11-20 10:02:36.970120] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:03.451 [2024-11-20 10:02:36.970124] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.970128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.451 [2024-11-20 10:02:36.970143] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:03.451 [2024-11-20 10:02:36.970282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.451 [2024-11-20 10:02:36.970293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.451 [2024-11-20 10:02:36.970300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.451 [2024-11-20 10:02:36.970309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.451 [2024-11-20 10:02:36.970319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.451 [2024-11-20 10:02:36.970324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.451 [2024-11-20 10:02:36.970331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.451 [2024-11-20 10:02:36.970336] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.451 [2024-11-20 10:02:36.970341] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.451 [2024-11-20 10:02:36.970345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.451 10:02:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.451 [2024-11-20 10:02:36.980174] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:03.451 [2024-11-20 10:02:36.980186] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:03.451 [2024-11-20 10:02:36.980190] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.980194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.451 [2024-11-20 10:02:36.980211] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.451 [2024-11-20 10:02:36.980379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:03.451 [2024-11-20 10:02:36.980392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5390 with addr=10.0.0.2, port=4420 00:24:03.451 [2024-11-20 10:02:36.980400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5390 is same with the state(6) to be set 00:24:03.452 [2024-11-20 10:02:36.980411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5390 (9): Bad file descriptor 00:24:03.452 [2024-11-20 10:02:36.980425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.452 [2024-11-20 10:02:36.980432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.452 [2024-11-20 10:02:36.980439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.452 [2024-11-20 10:02:36.980445] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.452 [2024-11-20 10:02:36.980449] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.452 [2024-11-20 10:02:36.980453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:03.452 [2024-11-20 10:02:36.984871] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:03.452 [2024-11-20 10:02:36.984887] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:03.452 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:24:03.452 10:02:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:04.833 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:04.834 10:02:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.834 
10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:04.834 10:02:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.834 10:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.771 [2024-11-20 10:02:39.286308] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:05.771 [2024-11-20 10:02:39.286324] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:05.771 [2024-11-20 10:02:39.286334] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:06.029 [2024-11-20 10:02:39.412727] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:06.029 [2024-11-20 10:02:39.511493] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:06.029 [2024-11-20 10:02:39.512089] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xf0ea00:1 started. 00:24:06.029 [2024-11-20 10:02:39.513650] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:06.029 [2024-11-20 10:02:39.513674] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.029 [2024-11-20 10:02:39.515981] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xf0ea00 was disconnected and freed. delete nvme_qpair. 
00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.029 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.029 request: 00:24:06.029 { 00:24:06.029 "name": "nvme", 00:24:06.029 "trtype": "tcp", 00:24:06.029 "traddr": "10.0.0.2", 00:24:06.030 "adrfam": "ipv4", 00:24:06.030 "trsvcid": "8009", 00:24:06.030 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:06.030 "wait_for_attach": true, 00:24:06.030 "method": "bdev_nvme_start_discovery", 00:24:06.030 "req_id": 1 00:24:06.030 } 00:24:06.030 Got JSON-RPC error response 00:24:06.030 response: 00:24:06.030 { 00:24:06.030 "code": -17, 00:24:06.030 "message": "File exists" 00:24:06.030 } 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.030 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.289 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.289 request: 00:24:06.289 { 00:24:06.289 "name": "nvme_second", 00:24:06.289 "trtype": "tcp", 00:24:06.289 "traddr": "10.0.0.2", 00:24:06.289 "adrfam": "ipv4", 00:24:06.289 
"trsvcid": "8009", 00:24:06.289 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:06.289 "wait_for_attach": true, 00:24:06.290 "method": "bdev_nvme_start_discovery", 00:24:06.290 "req_id": 1 00:24:06.290 } 00:24:06.290 Got JSON-RPC error response 00:24:06.290 response: 00:24:06.290 { 00:24:06.290 "code": -17, 00:24:06.290 "message": "File exists" 00:24:06.290 } 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 
-- # get_bdev_list 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.290 10:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:07.305 [2024-11-20 10:02:40.757406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.305 [2024-11-20 10:02:40.757438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1ffd0 with addr=10.0.0.2, port=8010 00:24:07.305 [2024-11-20 10:02:40.757454] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:07.305 [2024-11-20 10:02:40.757460] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:07.305 [2024-11-20 10:02:40.757467] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:08.243 [2024-11-20 10:02:41.759862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.243 [2024-11-20 10:02:41.759889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1ffd0 with addr=10.0.0.2, port=8010 00:24:08.243 [2024-11-20 10:02:41.759900] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:08.243 [2024-11-20 10:02:41.759907] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:08.243 [2024-11-20 10:02:41.759913] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:09.621 [2024-11-20 10:02:42.762066] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:09.621 request: 00:24:09.621 { 00:24:09.621 "name": 
"nvme_second", 00:24:09.621 "trtype": "tcp", 00:24:09.621 "traddr": "10.0.0.2", 00:24:09.621 "adrfam": "ipv4", 00:24:09.621 "trsvcid": "8010", 00:24:09.621 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:09.621 "wait_for_attach": false, 00:24:09.621 "attach_timeout_ms": 3000, 00:24:09.621 "method": "bdev_nvme_start_discovery", 00:24:09.621 "req_id": 1 00:24:09.621 } 00:24:09.621 Got JSON-RPC error response 00:24:09.621 response: 00:24:09.621 { 00:24:09.621 "code": -110, 00:24:09.621 "message": "Connection timed out" 00:24:09.621 } 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:09.621 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.622 10:02:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2759997 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.622 rmmod nvme_tcp 00:24:09.622 rmmod nvme_fabrics 00:24:09.622 rmmod nvme_keyring 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2759974 ']' 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2759974 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2759974 ']' 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2759974 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:09.622 
10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759974 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759974' 00:24:09.622 killing process with pid 2759974 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2759974 00:24:09.622 10:02:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2759974 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.622 10:02:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.158 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:12.158 00:24:12.158 real 0m18.259s 00:24:12.158 user 0m22.531s 00:24:12.159 sys 0m5.906s 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.159 ************************************ 00:24:12.159 END TEST nvmf_host_discovery 00:24:12.159 ************************************ 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.159 ************************************ 00:24:12.159 START TEST nvmf_host_multipath_status 00:24:12.159 ************************************ 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:12.159 * Looking for test storage... 
00:24:12.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:12.159 10:02:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.159 10:02:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.159 --rc genhtml_branch_coverage=1 00:24:12.159 --rc genhtml_function_coverage=1 00:24:12.159 --rc genhtml_legend=1 00:24:12.159 --rc geninfo_all_blocks=1 00:24:12.159 --rc geninfo_unexecuted_blocks=1 00:24:12.159 00:24:12.159 ' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.159 --rc genhtml_branch_coverage=1 00:24:12.159 --rc genhtml_function_coverage=1 00:24:12.159 --rc genhtml_legend=1 00:24:12.159 --rc geninfo_all_blocks=1 00:24:12.159 --rc geninfo_unexecuted_blocks=1 00:24:12.159 00:24:12.159 ' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.159 --rc genhtml_branch_coverage=1 00:24:12.159 --rc genhtml_function_coverage=1 00:24:12.159 --rc genhtml_legend=1 00:24:12.159 --rc geninfo_all_blocks=1 00:24:12.159 --rc geninfo_unexecuted_blocks=1 00:24:12.159 00:24:12.159 ' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:12.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.159 --rc genhtml_branch_coverage=1 00:24:12.159 --rc genhtml_function_coverage=1 00:24:12.159 --rc genhtml_legend=1 00:24:12.159 --rc geninfo_all_blocks=1 00:24:12.159 --rc geninfo_unexecuted_blocks=1 00:24:12.159 00:24:12.159 ' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:12.159 
10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.159 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.160 10:02:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.160 10:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.730 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:18.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:18.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:18.731 Found net devices under 0000:86:00.0: cvl_0_0 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.731 10:02:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:18.731 Found net devices under 0000:86:00.1: cvl_0_1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.731 10:02:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:18.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:18.731 00:24:18.731 --- 10.0.0.2 ping statistics --- 00:24:18.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.731 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:24:18.731 00:24:18.731 --- 10.0.0.1 ping statistics --- 00:24:18.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.731 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.731 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2765110 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2765110 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2765110 ']' 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:18.732 [2024-11-20 10:02:51.427466] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:24:18.732 [2024-11-20 10:02:51.427508] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.732 [2024-11-20 10:02:51.507274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:18.732 [2024-11-20 10:02:51.548734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.732 [2024-11-20 10:02:51.548771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:18.732 [2024-11-20 10:02:51.548778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:18.732 [2024-11-20 10:02:51.548785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:18.732 [2024-11-20 10:02:51.548792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:18.732 [2024-11-20 10:02:51.550042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:18.732 [2024-11-20 10:02:51.550042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2765110
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:18.732 [2024-11-20 10:02:51.849903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:18.732 10:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:18.732 Malloc0
00:24:18.732 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:24:18.732 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:18.991 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:19.251 [2024-11-20 10:02:52.661446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:19.251 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:19.510 [2024-11-20 10:02:52.862014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2765410
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2765410 /var/tmp/bdevperf.sock
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2765410 ']'
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:19.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:19.510 10:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:19.769 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:19.769 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:24:19.769 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:19.769 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:20.338 Nvme0n1
00:24:20.338 10:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:20.597 Nvme0n1
00:24:20.597 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:24:20.597 10:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:24:23.132 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:24:23.132 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:23.132 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:23.132 10:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:24:24.080 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:24:24.080 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:24.080 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.080 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:24.358 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.358 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:24.358 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.358 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:24.627 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:24.628 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:24.628 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.628 10:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:24.628 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.628 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:24.628 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:24.628 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:24.886 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:24.886 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:24.886 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:24.886 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:25.145 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:25.145 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:25.145 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:25.145 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:25.404 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:25.404 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:24:25.404 10:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:25.662 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:25.925 10:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:24:26.862 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:24:26.862 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:26.862 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:26.862 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:27.121 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:27.121 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:27.121 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.121 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.381 10:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:27.640 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:27.640 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:27.640 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.640 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:27.899 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:27.899 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:27.899 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:27.899 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:28.158 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:28.158 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:24:28.158 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:28.158 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:24:28.417 10:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:24:29.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:24:29.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:29.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.353 10:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:29.611 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:29.611 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:29.611 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.611 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:29.869 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:29.869 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:29.869 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.869 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:30.128 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:30.386 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.386 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:30.386 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:30.386 10:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:30.644 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:30.644 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:24:30.644 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:30.902 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:31.160 10:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:24:32.094 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:24:32.094 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:32.094 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.094 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:32.355 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:32.355 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:32.355 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.355 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:32.615 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:32.615 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:32.615 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.615 10:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:32.615 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:32.615 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:32.615 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.615 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:32.874 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:32.874 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:32.874 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:32.874 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:33.132 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:33.133 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:33.133 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:33.133 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:33.391 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:33.391 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:24:33.391 10:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:33.649 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:33.649 10:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:35.024 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:35.282 10:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.540 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:35.540 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:35.540 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.540 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:35.813 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:35.813 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:35.813 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:35.813 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:36.074 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:36.074 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:24:36.074 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:36.074 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:36.332 10:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:24:37.263 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:24:37.264 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:37.264 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.264 10:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:37.521 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:37.521 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:37.521 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.521 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:37.779 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:37.779 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:37.779 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:37.779 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:38.038 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:38.038 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:38.038 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:38.038 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:38.297 10:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:38.556 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:38.556 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:24:38.814 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:24:38.814 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:39.072 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:39.331 10:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:24:40.267 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:24:40.267 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:40.267 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.267 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:40.526 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:40.526 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:40.526 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.526 10:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:40.784 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:40.785 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:40.785 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:41.043 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:41.043 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:41.043 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:41.043 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:41.303 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:41.303 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:41.303 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:41.303 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:41.561 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:41.561 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:24:41.561 10:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:41.820 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:41.820 10:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.198 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.457 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.457 10:03:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.457 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.457 10:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.457 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.457 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.457 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.457 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.716 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.716 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:43.716 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.716 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:43.975 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.975 
10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:43.975 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.975 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.233 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.233 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:44.233 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.491 10:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:44.750 10:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:45.687 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:45.687 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.687 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.687 10:03:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:45.945 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.203 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.203 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.203 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.203 10:03:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.462 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.462 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.462 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.462 10:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.720 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.720 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:46.720 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.720 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:46.978 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.978 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:46.978 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.237 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.237 10:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:48.613 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:48.613 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.613 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.613 10:03:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.613 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.614 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.614 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.614 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.873 10:03:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.873 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.133 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.133 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.133 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.133 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.391 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.391 
10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.391 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.391 10:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2765410 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2765410 ']' 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2765410 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765410 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765410' 00:24:49.650 killing process with pid 2765410 00:24:49.650 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2765410 00:24:49.650 
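The `port_status` checks traced above all drive the same jq filter from `host/multipath_status.sh` against `bdev_nvme_get_io_paths` output. A minimal sketch of that selection, run against a hand-written JSON document of the same shape (the sample values here are illustrative, not taken from a live bdevperf socket):

```shell
# Illustrative io_paths document mimicking the shape the test queries;
# the boolean values are made up for demonstration.
paths='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":false}
]}]}'
# Same filter as host/multipath_status.sh@64: select the io_path for one
# listener port by trsvcid, then read a single boolean field from it.
echo "$paths" | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
echo "$paths" | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
```

The test then compares the printed `true`/`false` string against the expected state, exactly as in the `[[ true == \t\r\u\e ]]` lines logged above.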
10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2765410
00:24:49.650 {
00:24:49.650 "results": [
00:24:49.650 {
00:24:49.650 "job": "Nvme0n1",
00:24:49.650 "core_mask": "0x4",
00:24:49.650 "workload": "verify",
00:24:49.650 "status": "terminated",
00:24:49.650 "verify_range": {
00:24:49.650 "start": 0,
00:24:49.650 "length": 16384
00:24:49.650 },
00:24:49.650 "queue_depth": 128,
00:24:49.650 "io_size": 4096,
00:24:49.650 "runtime": 28.837073,
00:24:49.650 "iops": 10692.347312780323,
00:24:49.650 "mibps": 41.766981690548135,
00:24:49.650 "io_failed": 0,
00:24:49.650 "io_timeout": 0,
00:24:49.650 "avg_latency_us": 11951.821382733988,
00:24:49.650 "min_latency_us": 137.50857142857143,
00:24:49.650 "max_latency_us": 3019898.88
00:24:49.650 }
00:24:49.650 ],
00:24:49.650 "core_count": 1
00:24:49.650 }
00:24:49.914 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2765410
00:24:49.914 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-20 10:02:52.920892] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
[2024-11-20 10:02:52.920939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765410 ]
[2024-11-20 10:02:52.994469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 10:02:53.036198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
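The bdevperf result block logged above is plain JSON, so its summary fields can be pulled out with the same jq tooling the test already uses. A sketch, with the result object abbreviated to a few of the fields shown in the log:

```shell
# Abbreviated copy of the bdevperf result JSON from the log above;
# parsing it is a post-hoc illustration, not part of the test itself.
result='{"results":[{"job":"Nvme0n1","core_mask":"0x4","workload":"verify","status":"terminated","runtime":28.837073,"iops":10692.347312780323,"io_failed":0}],"core_count":1}'
# String interpolation in jq builds a one-line summary per job.
echo "$result" | jq -r '.results[0] | "\(.job): \(.status), io_failed=\(.io_failed)"'
```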
00:24:49.914 11540.00 IOPS, 45.08 MiB/s [2024-11-20T09:03:23.496Z] 11561.50 IOPS, 45.16 MiB/s [2024-11-20T09:03:23.496Z] 11605.67 IOPS, 45.33 MiB/s [2024-11-20T09:03:23.496Z] 11555.00 IOPS, 45.14 MiB/s [2024-11-20T09:03:23.496Z] 11550.80 IOPS, 45.12 MiB/s [2024-11-20T09:03:23.496Z] 11538.33 IOPS, 45.07 MiB/s [2024-11-20T09:03:23.496Z] 11495.29 IOPS, 44.90 MiB/s [2024-11-20T09:03:23.496Z] 11489.50 IOPS, 44.88 MiB/s [2024-11-20T09:03:23.496Z] 11485.56 IOPS, 44.87 MiB/s [2024-11-20T09:03:23.496Z] 11484.10 IOPS, 44.86 MiB/s [2024-11-20T09:03:23.496Z] 11501.09 IOPS, 44.93 MiB/s [2024-11-20T09:03:23.496Z] 11498.25 IOPS, 44.92 MiB/s [2024-11-20T09:03:23.496Z] [2024-11-20 10:03:07.001155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.914 [2024-11-20 10:03:07.001790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:49.914 [2024-11-20 10:03:07.001802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.915 [2024-11-20 10:03:07.001905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.915 [2024-11-20 10:03:07.001925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.001984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.001996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127504 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:24:49.915 [2024-11-20 10:03:07.002432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:49.915 [2024-11-20 10:03:07.002805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:49.915 [2024-11-20 10:03:07.002822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.915 [2024-11-20 10:03:07.002830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:49.916 
[2024-11-20 10:03:07.002937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.002985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.002992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 
[2024-11-20 10:03:07.003061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 
10:03:07.003238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.916 [2024-11-20 10:03:07.003758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:49.916 [2024-11-20 10:03:07.003774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.003985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.003992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:07.004147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:07.004316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:07.004326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:49.917 11304.69 IOPS, 44.16 MiB/s [2024-11-20T09:03:23.499Z] 10497.21 IOPS, 41.00 MiB/s [2024-11-20T09:03:23.499Z] 9797.40 IOPS, 38.27 MiB/s [2024-11-20T09:03:23.499Z] 9339.38 IOPS, 36.48 MiB/s [2024-11-20T09:03:23.499Z] 9442.71 IOPS, 36.89 MiB/s [2024-11-20T09:03:23.499Z] 9547.11 IOPS, 37.29 MiB/s [2024-11-20T09:03:23.499Z] 9763.84 IOPS, 38.14 MiB/s [2024-11-20T09:03:23.499Z] 9966.35 IOPS, 38.93 MiB/s [2024-11-20T09:03:23.499Z] 10140.43 IOPS, 39.61 MiB/s [2024-11-20T09:03:23.499Z] 10208.14 IOPS, 39.88 MiB/s [2024-11-20T09:03:23.499Z] 10261.61 IOPS, 40.08 MiB/s [2024-11-20T09:03:23.499Z] 10326.17 IOPS, 40.34 MiB/s [2024-11-20T09:03:23.499Z] 10455.96 IOPS, 40.84 MiB/s [2024-11-20T09:03:23.499Z] 10577.00 IOPS, 41.32 MiB/s [2024-11-20T09:03:23.499Z] [2024-11-20 10:03:20.768540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.768658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.768677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.768695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.768713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768725] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.768762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.768768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.769344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.917 [2024-11-20 10:03:20.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.769376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.769383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.769395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.917 [2024-11-20 10:03:20.769402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:49.917 [2024-11-20 10:03:20.769415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.918 [2024-11-20 10:03:20.769849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.769899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.769906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.770233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.770244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:49.918 [2024-11-20 10:03:20.770262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.918 [2024-11-20 10:03:20.770269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.919 [2024-11-20 10:03:20.770553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:49.919 [2024-11-20 10:03:20.770622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:49.919 [2024-11-20 10:03:20.770628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:49.919 10650.96 IOPS, 41.61 MiB/s [2024-11-20T09:03:23.501Z] 10675.39 IOPS, 41.70 MiB/s [2024-11-20T09:03:23.501Z] Received shutdown signal, test time was about 28.837716 seconds 00:24:49.919 00:24:49.919 Latency(us) 00:24:49.919 [2024-11-20T09:03:23.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.919 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:49.919 Verification LBA range: start 0x0 length 0x4000 00:24:49.919 Nvme0n1 : 28.84 10692.35 41.77 0.00 0.00 11951.82 137.51 3019898.88 00:24:49.919 [2024-11-20T09:03:23.501Z] =================================================================================================================== 00:24:49.919 [2024-11-20T09:03:23.501Z] Total : 10692.35 41.77 0.00 
0.00 11951.82 137.51 3019898.88 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.919 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.919 rmmod nvme_tcp 00:24:49.919 rmmod nvme_fabrics 00:24:50.178 rmmod nvme_keyring 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2765110 ']' 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2765110 
00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2765110 ']' 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2765110 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765110 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765110' 00:24:50.178 killing process with pid 2765110 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2765110 00:24:50.178 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2765110 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.437 10:03:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.437 10:03:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.341 00:24:52.341 real 0m40.593s 00:24:52.341 user 1m49.820s 00:24:52.341 sys 0m11.742s 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:52.341 ************************************ 00:24:52.341 END TEST nvmf_host_multipath_status 00:24:52.341 ************************************ 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.341 10:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.341 ************************************ 00:24:52.341 START TEST nvmf_discovery_remove_ifc 00:24:52.341 ************************************ 00:24:52.341 
10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:52.600 * Looking for test storage... 00:24:52.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.600 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.600 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.600 10:03:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:52.600 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:52.601 10:03:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.601 --rc genhtml_branch_coverage=1 00:24:52.601 --rc genhtml_function_coverage=1 00:24:52.601 --rc genhtml_legend=1 00:24:52.601 --rc geninfo_all_blocks=1 00:24:52.601 --rc geninfo_unexecuted_blocks=1 00:24:52.601 00:24:52.601 ' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.601 --rc genhtml_branch_coverage=1 00:24:52.601 --rc genhtml_function_coverage=1 00:24:52.601 --rc genhtml_legend=1 00:24:52.601 --rc geninfo_all_blocks=1 00:24:52.601 --rc geninfo_unexecuted_blocks=1 00:24:52.601 00:24:52.601 ' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.601 --rc genhtml_branch_coverage=1 00:24:52.601 --rc genhtml_function_coverage=1 00:24:52.601 --rc genhtml_legend=1 00:24:52.601 --rc geninfo_all_blocks=1 00:24:52.601 --rc geninfo_unexecuted_blocks=1 00:24:52.601 00:24:52.601 ' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.601 --rc genhtml_branch_coverage=1 00:24:52.601 --rc genhtml_function_coverage=1 00:24:52.601 --rc genhtml_legend=1 00:24:52.601 --rc geninfo_all_blocks=1 00:24:52.601 --rc geninfo_unexecuted_blocks=1 00:24:52.601 00:24:52.601 ' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:52.601 
10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.601 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.602 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:52.602 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.602 10:03:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:59.169 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.170 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.170 10:03:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.170 Found net devices under 0000:86:00.1: cvl_0_1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.170 10:03:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.170 10:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.170 10:03:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:24:59.170 00:24:59.170 --- 10.0.0.2 ping statistics --- 00:24:59.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.170 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:59.170 00:24:59.170 --- 10.0.0.1 ping statistics --- 00:24:59.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.170 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2774618 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2774618 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:59.170 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2774618 ']' 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 [2024-11-20 10:03:32.124593] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:24:59.171 [2024-11-20 10:03:32.124635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.171 [2024-11-20 10:03:32.201368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.171 [2024-11-20 10:03:32.241750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.171 [2024-11-20 10:03:32.241785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:59.171 [2024-11-20 10:03:32.241792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.171 [2024-11-20 10:03:32.241797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.171 [2024-11-20 10:03:32.241802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.171 [2024-11-20 10:03:32.242372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 [2024-11-20 10:03:32.383972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.171 [2024-11-20 10:03:32.392136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:59.171 null0 00:24:59.171 [2024-11-20 10:03:32.424137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2774644 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2774644 /tmp/host.sock 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2774644 ']' 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:59.171 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 [2024-11-20 10:03:32.493241] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:24:59.171 [2024-11-20 10:03:32.493281] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774644 ] 00:24:59.171 [2024-11-20 10:03:32.566155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.171 [2024-11-20 10:03:32.609142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.171 10:03:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.171 10:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.549 [2024-11-20 10:03:33.740269] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:00.549 [2024-11-20 10:03:33.740289] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:00.549 [2024-11-20 10:03:33.740304] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.549 [2024-11-20 10:03:33.867713] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:00.549 [2024-11-20 10:03:33.969497] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:00.549 [2024-11-20 10:03:33.970200] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x232d9f0:1 started. 
00:25:00.549 [2024-11-20 10:03:33.971512] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:00.549 [2024-11-20 10:03:33.971552] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:00.549 [2024-11-20 10:03:33.971570] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:00.549 [2024-11-20 10:03:33.971582] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:00.549 [2024-11-20 10:03:33.971598] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.549 [2024-11-20 10:03:33.979387] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x232d9f0 was disconnected and freed. delete nvme_qpair. 
00:25:00.549 10:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.549 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:00.549 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:00.549 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:00.549 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.808 10:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.743 10:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:02.680 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.938 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:02.938 10:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:03.875 10:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.811 10:03:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:04.811 10:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.184 [2024-11-20 10:03:39.413223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:06.184 
[2024-11-20 10:03:39.413255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.184 [2024-11-20 10:03:39.413265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.184 [2024-11-20 10:03:39.413274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.184 [2024-11-20 10:03:39.413281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.184 [2024-11-20 10:03:39.413288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.184 [2024-11-20 10:03:39.413294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.184 [2024-11-20 10:03:39.413301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.184 [2024-11-20 10:03:39.413308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.184 [2024-11-20 10:03:39.413315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.184 [2024-11-20 10:03:39.413322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.184 [2024-11-20 10:03:39.413329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a220 is same with the state(6) to be set 00:25:06.184 [2024-11-20 10:03:39.423250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x230a220 (9): Bad file descriptor 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:06.184 10:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:06.184 [2024-11-20 10:03:39.433285] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.184 [2024-11-20 10:03:39.433297] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.184 [2024-11-20 10:03:39.433302] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.184 [2024-11-20 10:03:39.433306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.184 [2024-11-20 10:03:39.433324] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.119 [2024-11-20 10:03:40.487413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:07.119 [2024-11-20 10:03:40.487501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a220 with addr=10.0.0.2, port=4420 00:25:07.119 [2024-11-20 10:03:40.487534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a220 is same with the state(6) to be set 00:25:07.119 [2024-11-20 10:03:40.487593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a220 (9): Bad file descriptor 00:25:07.119 [2024-11-20 10:03:40.488558] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:07.119 [2024-11-20 10:03:40.488622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:07.119 [2024-11-20 10:03:40.488644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:07.119 [2024-11-20 10:03:40.488666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:07.119 [2024-11-20 10:03:40.488687] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:07.119 [2024-11-20 10:03:40.488704] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:07.119 [2024-11-20 10:03:40.488718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:07.119 [2024-11-20 10:03:40.488742] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:07.119 [2024-11-20 10:03:40.488756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.119 10:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.056 [2024-11-20 10:03:41.491272] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:08.056 [2024-11-20 10:03:41.491293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:08.056 [2024-11-20 10:03:41.491304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:08.056 [2024-11-20 10:03:41.491311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:08.056 [2024-11-20 10:03:41.491318] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:08.056 [2024-11-20 10:03:41.491324] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:08.056 [2024-11-20 10:03:41.491345] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:08.056 [2024-11-20 10:03:41.491348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:08.056 [2024-11-20 10:03:41.491370] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:08.056 [2024-11-20 10:03:41.491390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.056 [2024-11-20 10:03:41.491400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.056 [2024-11-20 10:03:41.491409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.056 [2024-11-20 10:03:41.491421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.056 [2024-11-20 10:03:41.491429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:08.056 [2024-11-20 10:03:41.491436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.056 [2024-11-20 10:03:41.491443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.056 [2024-11-20 10:03:41.491449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.056 [2024-11-20 10:03:41.491457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.056 [2024-11-20 10:03:41.491464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.056 [2024-11-20 10:03:41.491470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:08.056 [2024-11-20 10:03:41.491863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f9900 (9): Bad file descriptor 00:25:08.056 [2024-11-20 10:03:41.492874] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:08.056 [2024-11-20 10:03:41.492884] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.056 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:08.315 10:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.303 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:09.304 10:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.282 [2024-11-20 10:03:43.546673] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:10.282 [2024-11-20 10:03:43.546692] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:10.282 [2024-11-20 10:03:43.546704] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.282 [2024-11-20 10:03:43.675096] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:10.282 [2024-11-20 10:03:43.734665] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:10.282 [2024-11-20 10:03:43.735276] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x22fe760:1 started. 00:25:10.282 [2024-11-20 10:03:43.736300] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:10.283 [2024-11-20 10:03:43.736330] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:10.283 [2024-11-20 10:03:43.736346] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:10.283 [2024-11-20 10:03:43.736358] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:10.283 [2024-11-20 10:03:43.736366] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.283 [2024-11-20 10:03:43.744352] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x22fe760 was disconnected and freed. delete nvme_qpair. 
00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2774644 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2774644 ']' 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2774644 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774644 
00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774644' 00:25:10.283 killing process with pid 2774644 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2774644 00:25:10.283 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2774644 00:25:10.541 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:10.541 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.541 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:10.541 10:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.541 rmmod nvme_tcp 00:25:10.541 rmmod nvme_fabrics 00:25:10.541 rmmod nvme_keyring 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2774618 ']' 00:25:10.541 
10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2774618 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2774618 ']' 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2774618 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774618 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774618' 00:25:10.541 killing process with pid 2774618 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2774618 00:25:10.541 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2774618 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:10.800 10:03:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.800 10:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.333 00:25:13.333 real 0m20.436s 00:25:13.333 user 0m24.578s 00:25:13.333 sys 0m5.858s 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.333 ************************************ 00:25:13.333 END TEST nvmf_discovery_remove_ifc 00:25:13.333 ************************************ 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.333 ************************************ 
00:25:13.333 START TEST nvmf_identify_kernel_target 00:25:13.333 ************************************ 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.333 * Looking for test storage... 00:25:13.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.333 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.333 10:03:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.334 --rc genhtml_branch_coverage=1 00:25:13.334 --rc genhtml_function_coverage=1 00:25:13.334 --rc genhtml_legend=1 00:25:13.334 --rc geninfo_all_blocks=1 00:25:13.334 --rc geninfo_unexecuted_blocks=1 00:25:13.334 00:25:13.334 ' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.334 --rc genhtml_branch_coverage=1 00:25:13.334 --rc genhtml_function_coverage=1 00:25:13.334 --rc genhtml_legend=1 00:25:13.334 --rc geninfo_all_blocks=1 00:25:13.334 --rc geninfo_unexecuted_blocks=1 00:25:13.334 00:25:13.334 ' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.334 --rc genhtml_branch_coverage=1 00:25:13.334 --rc genhtml_function_coverage=1 00:25:13.334 --rc genhtml_legend=1 00:25:13.334 --rc geninfo_all_blocks=1 00:25:13.334 --rc geninfo_unexecuted_blocks=1 00:25:13.334 00:25:13.334 ' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.334 --rc genhtml_branch_coverage=1 00:25:13.334 --rc genhtml_function_coverage=1 00:25:13.334 --rc genhtml_legend=1 00:25:13.334 --rc geninfo_all_blocks=1 
00:25:13.334 --rc geninfo_unexecuted_blocks=1 00:25:13.334 00:25:13.334 ' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.334 10:03:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.904 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.904 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:19.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.905 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:19.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.905 10:03:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:19.905 Found net devices under 0000:86:00.0: cvl_0_0 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:19.905 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:19.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:25:19.905 00:25:19.905 --- 10.0.0.2 ping statistics --- 00:25:19.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.905 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:19.905 00:25:19.905 --- 10.0.0.1 ping statistics --- 00:25:19.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.905 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:19.905 
10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:19.905 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:19.906 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:19.906 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:19.906 10:03:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:21.810 Waiting for block devices as requested 00:25:21.810 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:22.069 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:22.069 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:22.069 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:22.328 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:22.328 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:22.328 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:22.588 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:22.588 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:22.588 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:22.588 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:22.848 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:22.848 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:22.848 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:23.107 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:23.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:23.107 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:23.365 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:23.365 No valid GPT data, bailing 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:23.366 00:25:23.366 Discovery Log Number of Records 2, Generation counter 2 00:25:23.366 =====Discovery Log Entry 0====== 00:25:23.366 trtype: tcp 00:25:23.366 adrfam: ipv4 00:25:23.366 subtype: current discovery subsystem 
00:25:23.366 treq: not specified, sq flow control disable supported 00:25:23.366 portid: 1 00:25:23.366 trsvcid: 4420 00:25:23.366 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:23.366 traddr: 10.0.0.1 00:25:23.366 eflags: none 00:25:23.366 sectype: none 00:25:23.366 =====Discovery Log Entry 1====== 00:25:23.366 trtype: tcp 00:25:23.366 adrfam: ipv4 00:25:23.366 subtype: nvme subsystem 00:25:23.366 treq: not specified, sq flow control disable supported 00:25:23.366 portid: 1 00:25:23.366 trsvcid: 4420 00:25:23.366 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:23.366 traddr: 10.0.0.1 00:25:23.366 eflags: none 00:25:23.366 sectype: none 00:25:23.366 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:23.366 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:23.626 ===================================================== 00:25:23.626 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:23.626 ===================================================== 00:25:23.626 Controller Capabilities/Features 00:25:23.626 ================================ 00:25:23.626 Vendor ID: 0000 00:25:23.626 Subsystem Vendor ID: 0000 00:25:23.626 Serial Number: c9348029b0e8cbc7a1c9 00:25:23.626 Model Number: Linux 00:25:23.626 Firmware Version: 6.8.9-20 00:25:23.626 Recommended Arb Burst: 0 00:25:23.626 IEEE OUI Identifier: 00 00 00 00:25:23.626 Multi-path I/O 00:25:23.626 May have multiple subsystem ports: No 00:25:23.626 May have multiple controllers: No 00:25:23.626 Associated with SR-IOV VF: No 00:25:23.626 Max Data Transfer Size: Unlimited 00:25:23.626 Max Number of Namespaces: 0 00:25:23.626 Max Number of I/O Queues: 1024 00:25:23.626 NVMe Specification Version (VS): 1.3 00:25:23.626 NVMe Specification Version (Identify): 1.3 00:25:23.626 Maximum Queue Entries: 1024 
00:25:23.626 Contiguous Queues Required: No 00:25:23.626 Arbitration Mechanisms Supported 00:25:23.626 Weighted Round Robin: Not Supported 00:25:23.626 Vendor Specific: Not Supported 00:25:23.626 Reset Timeout: 7500 ms 00:25:23.626 Doorbell Stride: 4 bytes 00:25:23.626 NVM Subsystem Reset: Not Supported 00:25:23.626 Command Sets Supported 00:25:23.626 NVM Command Set: Supported 00:25:23.626 Boot Partition: Not Supported 00:25:23.626 Memory Page Size Minimum: 4096 bytes 00:25:23.626 Memory Page Size Maximum: 4096 bytes 00:25:23.626 Persistent Memory Region: Not Supported 00:25:23.626 Optional Asynchronous Events Supported 00:25:23.626 Namespace Attribute Notices: Not Supported 00:25:23.626 Firmware Activation Notices: Not Supported 00:25:23.626 ANA Change Notices: Not Supported 00:25:23.626 PLE Aggregate Log Change Notices: Not Supported 00:25:23.626 LBA Status Info Alert Notices: Not Supported 00:25:23.626 EGE Aggregate Log Change Notices: Not Supported 00:25:23.626 Normal NVM Subsystem Shutdown event: Not Supported 00:25:23.626 Zone Descriptor Change Notices: Not Supported 00:25:23.626 Discovery Log Change Notices: Supported 00:25:23.626 Controller Attributes 00:25:23.626 128-bit Host Identifier: Not Supported 00:25:23.626 Non-Operational Permissive Mode: Not Supported 00:25:23.626 NVM Sets: Not Supported 00:25:23.626 Read Recovery Levels: Not Supported 00:25:23.626 Endurance Groups: Not Supported 00:25:23.626 Predictable Latency Mode: Not Supported 00:25:23.626 Traffic Based Keep ALive: Not Supported 00:25:23.626 Namespace Granularity: Not Supported 00:25:23.626 SQ Associations: Not Supported 00:25:23.626 UUID List: Not Supported 00:25:23.626 Multi-Domain Subsystem: Not Supported 00:25:23.626 Fixed Capacity Management: Not Supported 00:25:23.626 Variable Capacity Management: Not Supported 00:25:23.626 Delete Endurance Group: Not Supported 00:25:23.626 Delete NVM Set: Not Supported 00:25:23.626 Extended LBA Formats Supported: Not Supported 00:25:23.626 Flexible 
Data Placement Supported: Not Supported 00:25:23.626 00:25:23.626 Controller Memory Buffer Support 00:25:23.626 ================================ 00:25:23.626 Supported: No 00:25:23.626 00:25:23.626 Persistent Memory Region Support 00:25:23.626 ================================ 00:25:23.626 Supported: No 00:25:23.626 00:25:23.626 Admin Command Set Attributes 00:25:23.626 ============================ 00:25:23.626 Security Send/Receive: Not Supported 00:25:23.626 Format NVM: Not Supported 00:25:23.626 Firmware Activate/Download: Not Supported 00:25:23.626 Namespace Management: Not Supported 00:25:23.626 Device Self-Test: Not Supported 00:25:23.626 Directives: Not Supported 00:25:23.626 NVMe-MI: Not Supported 00:25:23.626 Virtualization Management: Not Supported 00:25:23.626 Doorbell Buffer Config: Not Supported 00:25:23.626 Get LBA Status Capability: Not Supported 00:25:23.626 Command & Feature Lockdown Capability: Not Supported 00:25:23.626 Abort Command Limit: 1 00:25:23.626 Async Event Request Limit: 1 00:25:23.626 Number of Firmware Slots: N/A 00:25:23.626 Firmware Slot 1 Read-Only: N/A 00:25:23.626 Firmware Activation Without Reset: N/A 00:25:23.626 Multiple Update Detection Support: N/A 00:25:23.626 Firmware Update Granularity: No Information Provided 00:25:23.626 Per-Namespace SMART Log: No 00:25:23.626 Asymmetric Namespace Access Log Page: Not Supported 00:25:23.626 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:23.626 Command Effects Log Page: Not Supported 00:25:23.626 Get Log Page Extended Data: Supported 00:25:23.626 Telemetry Log Pages: Not Supported 00:25:23.626 Persistent Event Log Pages: Not Supported 00:25:23.626 Supported Log Pages Log Page: May Support 00:25:23.626 Commands Supported & Effects Log Page: Not Supported 00:25:23.626 Feature Identifiers & Effects Log Page:May Support 00:25:23.626 NVMe-MI Commands & Effects Log Page: May Support 00:25:23.626 Data Area 4 for Telemetry Log: Not Supported 00:25:23.626 Error Log Page Entries 
Supported: 1 00:25:23.626 Keep Alive: Not Supported 00:25:23.626 00:25:23.626 NVM Command Set Attributes 00:25:23.626 ========================== 00:25:23.626 Submission Queue Entry Size 00:25:23.626 Max: 1 00:25:23.626 Min: 1 00:25:23.626 Completion Queue Entry Size 00:25:23.626 Max: 1 00:25:23.626 Min: 1 00:25:23.626 Number of Namespaces: 0 00:25:23.626 Compare Command: Not Supported 00:25:23.626 Write Uncorrectable Command: Not Supported 00:25:23.626 Dataset Management Command: Not Supported 00:25:23.626 Write Zeroes Command: Not Supported 00:25:23.626 Set Features Save Field: Not Supported 00:25:23.626 Reservations: Not Supported 00:25:23.626 Timestamp: Not Supported 00:25:23.626 Copy: Not Supported 00:25:23.626 Volatile Write Cache: Not Present 00:25:23.626 Atomic Write Unit (Normal): 1 00:25:23.626 Atomic Write Unit (PFail): 1 00:25:23.626 Atomic Compare & Write Unit: 1 00:25:23.626 Fused Compare & Write: Not Supported 00:25:23.626 Scatter-Gather List 00:25:23.626 SGL Command Set: Supported 00:25:23.626 SGL Keyed: Not Supported 00:25:23.626 SGL Bit Bucket Descriptor: Not Supported 00:25:23.626 SGL Metadata Pointer: Not Supported 00:25:23.626 Oversized SGL: Not Supported 00:25:23.626 SGL Metadata Address: Not Supported 00:25:23.626 SGL Offset: Supported 00:25:23.626 Transport SGL Data Block: Not Supported 00:25:23.626 Replay Protected Memory Block: Not Supported 00:25:23.626 00:25:23.626 Firmware Slot Information 00:25:23.626 ========================= 00:25:23.626 Active slot: 0 00:25:23.626 00:25:23.626 00:25:23.626 Error Log 00:25:23.626 ========= 00:25:23.626 00:25:23.626 Active Namespaces 00:25:23.626 ================= 00:25:23.626 Discovery Log Page 00:25:23.626 ================== 00:25:23.626 Generation Counter: 2 00:25:23.626 Number of Records: 2 00:25:23.626 Record Format: 0 00:25:23.626 00:25:23.626 Discovery Log Entry 0 00:25:23.626 ---------------------- 00:25:23.626 Transport Type: 3 (TCP) 00:25:23.626 Address Family: 1 (IPv4) 00:25:23.626 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:23.626 Entry Flags: 00:25:23.626 Duplicate Returned Information: 0 00:25:23.626 Explicit Persistent Connection Support for Discovery: 0 00:25:23.626 Transport Requirements: 00:25:23.626 Secure Channel: Not Specified 00:25:23.626 Port ID: 1 (0x0001) 00:25:23.626 Controller ID: 65535 (0xffff) 00:25:23.626 Admin Max SQ Size: 32 00:25:23.626 Transport Service Identifier: 4420 00:25:23.626 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:23.626 Transport Address: 10.0.0.1 00:25:23.626 Discovery Log Entry 1 00:25:23.626 ---------------------- 00:25:23.626 Transport Type: 3 (TCP) 00:25:23.626 Address Family: 1 (IPv4) 00:25:23.627 Subsystem Type: 2 (NVM Subsystem) 00:25:23.627 Entry Flags: 00:25:23.627 Duplicate Returned Information: 0 00:25:23.627 Explicit Persistent Connection Support for Discovery: 0 00:25:23.627 Transport Requirements: 00:25:23.627 Secure Channel: Not Specified 00:25:23.627 Port ID: 1 (0x0001) 00:25:23.627 Controller ID: 65535 (0xffff) 00:25:23.627 Admin Max SQ Size: 32 00:25:23.627 Transport Service Identifier: 4420 00:25:23.627 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:23.627 Transport Address: 10.0.0.1 00:25:23.627 10:03:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:23.627 get_feature(0x01) failed 00:25:23.627 get_feature(0x02) failed 00:25:23.627 get_feature(0x04) failed 00:25:23.627 ===================================================== 00:25:23.627 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:23.627 ===================================================== 00:25:23.627 Controller Capabilities/Features 00:25:23.627 ================================ 00:25:23.627 Vendor ID: 0000 00:25:23.627 Subsystem Vendor ID: 
0000 00:25:23.627 Serial Number: 9296b13e705cc9b84004 00:25:23.627 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:23.627 Firmware Version: 6.8.9-20 00:25:23.627 Recommended Arb Burst: 6 00:25:23.627 IEEE OUI Identifier: 00 00 00 00:25:23.627 Multi-path I/O 00:25:23.627 May have multiple subsystem ports: Yes 00:25:23.627 May have multiple controllers: Yes 00:25:23.627 Associated with SR-IOV VF: No 00:25:23.627 Max Data Transfer Size: Unlimited 00:25:23.627 Max Number of Namespaces: 1024 00:25:23.627 Max Number of I/O Queues: 128 00:25:23.627 NVMe Specification Version (VS): 1.3 00:25:23.627 NVMe Specification Version (Identify): 1.3 00:25:23.627 Maximum Queue Entries: 1024 00:25:23.627 Contiguous Queues Required: No 00:25:23.627 Arbitration Mechanisms Supported 00:25:23.627 Weighted Round Robin: Not Supported 00:25:23.627 Vendor Specific: Not Supported 00:25:23.627 Reset Timeout: 7500 ms 00:25:23.627 Doorbell Stride: 4 bytes 00:25:23.627 NVM Subsystem Reset: Not Supported 00:25:23.627 Command Sets Supported 00:25:23.627 NVM Command Set: Supported 00:25:23.627 Boot Partition: Not Supported 00:25:23.627 Memory Page Size Minimum: 4096 bytes 00:25:23.627 Memory Page Size Maximum: 4096 bytes 00:25:23.627 Persistent Memory Region: Not Supported 00:25:23.627 Optional Asynchronous Events Supported 00:25:23.627 Namespace Attribute Notices: Supported 00:25:23.627 Firmware Activation Notices: Not Supported 00:25:23.627 ANA Change Notices: Supported 00:25:23.627 PLE Aggregate Log Change Notices: Not Supported 00:25:23.627 LBA Status Info Alert Notices: Not Supported 00:25:23.627 EGE Aggregate Log Change Notices: Not Supported 00:25:23.627 Normal NVM Subsystem Shutdown event: Not Supported 00:25:23.627 Zone Descriptor Change Notices: Not Supported 00:25:23.627 Discovery Log Change Notices: Not Supported 00:25:23.627 Controller Attributes 00:25:23.627 128-bit Host Identifier: Supported 00:25:23.627 Non-Operational Permissive Mode: Not Supported 00:25:23.627 NVM Sets: Not 
Supported 00:25:23.627 Read Recovery Levels: Not Supported 00:25:23.627 Endurance Groups: Not Supported 00:25:23.627 Predictable Latency Mode: Not Supported 00:25:23.627 Traffic Based Keep ALive: Supported 00:25:23.627 Namespace Granularity: Not Supported 00:25:23.627 SQ Associations: Not Supported 00:25:23.627 UUID List: Not Supported 00:25:23.627 Multi-Domain Subsystem: Not Supported 00:25:23.627 Fixed Capacity Management: Not Supported 00:25:23.627 Variable Capacity Management: Not Supported 00:25:23.627 Delete Endurance Group: Not Supported 00:25:23.627 Delete NVM Set: Not Supported 00:25:23.627 Extended LBA Formats Supported: Not Supported 00:25:23.627 Flexible Data Placement Supported: Not Supported 00:25:23.627 00:25:23.627 Controller Memory Buffer Support 00:25:23.627 ================================ 00:25:23.627 Supported: No 00:25:23.627 00:25:23.627 Persistent Memory Region Support 00:25:23.627 ================================ 00:25:23.627 Supported: No 00:25:23.627 00:25:23.627 Admin Command Set Attributes 00:25:23.627 ============================ 00:25:23.627 Security Send/Receive: Not Supported 00:25:23.627 Format NVM: Not Supported 00:25:23.627 Firmware Activate/Download: Not Supported 00:25:23.627 Namespace Management: Not Supported 00:25:23.627 Device Self-Test: Not Supported 00:25:23.627 Directives: Not Supported 00:25:23.627 NVMe-MI: Not Supported 00:25:23.627 Virtualization Management: Not Supported 00:25:23.627 Doorbell Buffer Config: Not Supported 00:25:23.627 Get LBA Status Capability: Not Supported 00:25:23.627 Command & Feature Lockdown Capability: Not Supported 00:25:23.627 Abort Command Limit: 4 00:25:23.627 Async Event Request Limit: 4 00:25:23.627 Number of Firmware Slots: N/A 00:25:23.627 Firmware Slot 1 Read-Only: N/A 00:25:23.627 Firmware Activation Without Reset: N/A 00:25:23.627 Multiple Update Detection Support: N/A 00:25:23.627 Firmware Update Granularity: No Information Provided 00:25:23.627 Per-Namespace SMART Log: Yes 
00:25:23.627 Asymmetric Namespace Access Log Page: Supported 00:25:23.627 ANA Transition Time : 10 sec 00:25:23.627 00:25:23.627 Asymmetric Namespace Access Capabilities 00:25:23.627 ANA Optimized State : Supported 00:25:23.627 ANA Non-Optimized State : Supported 00:25:23.627 ANA Inaccessible State : Supported 00:25:23.627 ANA Persistent Loss State : Supported 00:25:23.627 ANA Change State : Supported 00:25:23.627 ANAGRPID is not changed : No 00:25:23.627 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:23.627 00:25:23.627 ANA Group Identifier Maximum : 128 00:25:23.627 Number of ANA Group Identifiers : 128 00:25:23.627 Max Number of Allowed Namespaces : 1024 00:25:23.627 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:23.627 Command Effects Log Page: Supported 00:25:23.627 Get Log Page Extended Data: Supported 00:25:23.627 Telemetry Log Pages: Not Supported 00:25:23.627 Persistent Event Log Pages: Not Supported 00:25:23.627 Supported Log Pages Log Page: May Support 00:25:23.627 Commands Supported & Effects Log Page: Not Supported 00:25:23.627 Feature Identifiers & Effects Log Page:May Support 00:25:23.627 NVMe-MI Commands & Effects Log Page: May Support 00:25:23.627 Data Area 4 for Telemetry Log: Not Supported 00:25:23.627 Error Log Page Entries Supported: 128 00:25:23.627 Keep Alive: Supported 00:25:23.627 Keep Alive Granularity: 1000 ms 00:25:23.627 00:25:23.627 NVM Command Set Attributes 00:25:23.627 ========================== 00:25:23.627 Submission Queue Entry Size 00:25:23.627 Max: 64 00:25:23.627 Min: 64 00:25:23.627 Completion Queue Entry Size 00:25:23.627 Max: 16 00:25:23.627 Min: 16 00:25:23.627 Number of Namespaces: 1024 00:25:23.627 Compare Command: Not Supported 00:25:23.627 Write Uncorrectable Command: Not Supported 00:25:23.627 Dataset Management Command: Supported 00:25:23.627 Write Zeroes Command: Supported 00:25:23.627 Set Features Save Field: Not Supported 00:25:23.627 Reservations: Not Supported 00:25:23.627 Timestamp: Not Supported 
00:25:23.627 Copy: Not Supported 00:25:23.627 Volatile Write Cache: Present 00:25:23.627 Atomic Write Unit (Normal): 1 00:25:23.627 Atomic Write Unit (PFail): 1 00:25:23.627 Atomic Compare & Write Unit: 1 00:25:23.627 Fused Compare & Write: Not Supported 00:25:23.627 Scatter-Gather List 00:25:23.627 SGL Command Set: Supported 00:25:23.627 SGL Keyed: Not Supported 00:25:23.627 SGL Bit Bucket Descriptor: Not Supported 00:25:23.627 SGL Metadata Pointer: Not Supported 00:25:23.627 Oversized SGL: Not Supported 00:25:23.627 SGL Metadata Address: Not Supported 00:25:23.627 SGL Offset: Supported 00:25:23.627 Transport SGL Data Block: Not Supported 00:25:23.627 Replay Protected Memory Block: Not Supported 00:25:23.627 00:25:23.627 Firmware Slot Information 00:25:23.627 ========================= 00:25:23.627 Active slot: 0 00:25:23.627 00:25:23.627 Asymmetric Namespace Access 00:25:23.627 =========================== 00:25:23.627 Change Count : 0 00:25:23.627 Number of ANA Group Descriptors : 1 00:25:23.627 ANA Group Descriptor : 0 00:25:23.627 ANA Group ID : 1 00:25:23.627 Number of NSID Values : 1 00:25:23.627 Change Count : 0 00:25:23.627 ANA State : 1 00:25:23.627 Namespace Identifier : 1 00:25:23.627 00:25:23.627 Commands Supported and Effects 00:25:23.627 ============================== 00:25:23.627 Admin Commands 00:25:23.628 -------------- 00:25:23.628 Get Log Page (02h): Supported 00:25:23.628 Identify (06h): Supported 00:25:23.628 Abort (08h): Supported 00:25:23.628 Set Features (09h): Supported 00:25:23.628 Get Features (0Ah): Supported 00:25:23.628 Asynchronous Event Request (0Ch): Supported 00:25:23.628 Keep Alive (18h): Supported 00:25:23.628 I/O Commands 00:25:23.628 ------------ 00:25:23.628 Flush (00h): Supported 00:25:23.628 Write (01h): Supported LBA-Change 00:25:23.628 Read (02h): Supported 00:25:23.628 Write Zeroes (08h): Supported LBA-Change 00:25:23.628 Dataset Management (09h): Supported 00:25:23.628 00:25:23.628 Error Log 00:25:23.628 ========= 
00:25:23.628 Entry: 0 00:25:23.628 Error Count: 0x3 00:25:23.628 Submission Queue Id: 0x0 00:25:23.628 Command Id: 0x5 00:25:23.628 Phase Bit: 0 00:25:23.628 Status Code: 0x2 00:25:23.628 Status Code Type: 0x0 00:25:23.628 Do Not Retry: 1 00:25:23.628 Error Location: 0x28 00:25:23.628 LBA: 0x0 00:25:23.628 Namespace: 0x0 00:25:23.628 Vendor Log Page: 0x0 00:25:23.628 ----------- 00:25:23.628 Entry: 1 00:25:23.628 Error Count: 0x2 00:25:23.628 Submission Queue Id: 0x0 00:25:23.628 Command Id: 0x5 00:25:23.628 Phase Bit: 0 00:25:23.628 Status Code: 0x2 00:25:23.628 Status Code Type: 0x0 00:25:23.628 Do Not Retry: 1 00:25:23.628 Error Location: 0x28 00:25:23.628 LBA: 0x0 00:25:23.628 Namespace: 0x0 00:25:23.628 Vendor Log Page: 0x0 00:25:23.628 ----------- 00:25:23.628 Entry: 2 00:25:23.628 Error Count: 0x1 00:25:23.628 Submission Queue Id: 0x0 00:25:23.628 Command Id: 0x4 00:25:23.628 Phase Bit: 0 00:25:23.628 Status Code: 0x2 00:25:23.628 Status Code Type: 0x0 00:25:23.628 Do Not Retry: 1 00:25:23.628 Error Location: 0x28 00:25:23.628 LBA: 0x0 00:25:23.628 Namespace: 0x0 00:25:23.628 Vendor Log Page: 0x0 00:25:23.628 00:25:23.628 Number of Queues 00:25:23.628 ================ 00:25:23.628 Number of I/O Submission Queues: 128 00:25:23.628 Number of I/O Completion Queues: 128 00:25:23.628 00:25:23.628 ZNS Specific Controller Data 00:25:23.628 ============================ 00:25:23.628 Zone Append Size Limit: 0 00:25:23.628 00:25:23.628 00:25:23.628 Active Namespaces 00:25:23.628 ================= 00:25:23.628 get_feature(0x05) failed 00:25:23.628 Namespace ID:1 00:25:23.628 Command Set Identifier: NVM (00h) 00:25:23.628 Deallocate: Supported 00:25:23.628 Deallocated/Unwritten Error: Not Supported 00:25:23.628 Deallocated Read Value: Unknown 00:25:23.628 Deallocate in Write Zeroes: Not Supported 00:25:23.628 Deallocated Guard Field: 0xFFFF 00:25:23.628 Flush: Supported 00:25:23.628 Reservation: Not Supported 00:25:23.628 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:23.628 Size (in LBAs): 3125627568 (1490GiB) 00:25:23.628 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:23.628 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:23.628 UUID: 40d8a820-8419-415f-a66e-a5b53812a529 00:25:23.628 Thin Provisioning: Not Supported 00:25:23.628 Per-NS Atomic Units: Yes 00:25:23.628 Atomic Boundary Size (Normal): 0 00:25:23.628 Atomic Boundary Size (PFail): 0 00:25:23.628 Atomic Boundary Offset: 0 00:25:23.628 NGUID/EUI64 Never Reused: No 00:25:23.628 ANA group ID: 1 00:25:23.628 Namespace Write Protected: No 00:25:23.628 Number of LBA Formats: 1 00:25:23.628 Current LBA Format: LBA Format #00 00:25:23.628 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:23.628 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.628 rmmod nvme_tcp 00:25:23.628 rmmod nvme_fabrics 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
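The identify run above exercised a kernel NVMe-oF target that `configure_kernel_target` built through nvmet configfs (the `mkdir`, `echo`, and `ln -s` commands traced at `nvmf/common.sh@686`–`@705`). The trace records the values echoed but not the redirection targets, so the attribute file names below are taken from the upstream Linux nvmet configfs layout and are an inference, not part of the log; the backing device is whichever namespace `block_in_use` selected (here `/dev/nvme0n1`). A sketch of the procedure, which requires root and the `nvmet`/`nvmet-tcp` modules:

```shell
# Sketch of configure_kernel_target (nvmf/common.sh); attribute file
# names inferred from the kernel nvmet configfs interface.
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port; nvme discover/connect can now reach it.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```

The `clean_kernel_target` teardown traced later in this log undoes these steps in reverse: unlink the port symlink, disable the namespace, then `rmdir` the namespace, port, and subsystem directories before removing the modules.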
00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.628 10:03:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:26.163 10:03:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:26.163 10:03:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:28.699 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:28.699 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:30.078 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:30.337 00:25:30.337 real 0m17.268s 00:25:30.337 user 0m4.258s 00:25:30.337 sys 0m8.863s 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.337 ************************************ 00:25:30.337 END TEST nvmf_identify_kernel_target 00:25:30.337 ************************************ 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.337 ************************************ 00:25:30.337 START TEST nvmf_auth_host 00:25:30.337 ************************************ 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:30.337 * Looking for test storage... 
00:25:30.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.337 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:30.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.597 --rc genhtml_branch_coverage=1 00:25:30.597 --rc genhtml_function_coverage=1 00:25:30.597 --rc genhtml_legend=1 00:25:30.597 --rc geninfo_all_blocks=1 00:25:30.597 --rc geninfo_unexecuted_blocks=1 00:25:30.597 00:25:30.597 ' 00:25:30.597 10:04:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:30.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.597 --rc genhtml_branch_coverage=1 00:25:30.597 --rc genhtml_function_coverage=1 00:25:30.597 --rc genhtml_legend=1 00:25:30.597 --rc geninfo_all_blocks=1 00:25:30.597 --rc geninfo_unexecuted_blocks=1 00:25:30.597 00:25:30.597 ' 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:30.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.597 --rc genhtml_branch_coverage=1 00:25:30.597 --rc genhtml_function_coverage=1 00:25:30.597 --rc genhtml_legend=1 00:25:30.597 --rc geninfo_all_blocks=1 00:25:30.597 --rc geninfo_unexecuted_blocks=1 00:25:30.597 00:25:30.597 ' 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:30.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.597 --rc genhtml_branch_coverage=1 00:25:30.597 --rc genhtml_function_coverage=1 00:25:30.597 --rc genhtml_legend=1 00:25:30.597 --rc geninfo_all_blocks=1 00:25:30.597 --rc geninfo_unexecuted_blocks=1 00:25:30.597 00:25:30.597 ' 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.597 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.597 10:04:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.598 10:04:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.598 10:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.167 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.168 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.168 Found net devices under 0000:86:00.1: cvl_0_1 00:25:37.168 10:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.168 10:04:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:25:37.168 00:25:37.168 --- 10.0.0.2 ping statistics --- 00:25:37.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.168 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:37.168 00:25:37.168 --- 10.0.0.1 ping statistics --- 00:25:37.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.168 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.168 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2786647 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2786647 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2786647 ']' 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.169 10:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.169 10:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52b05d669a253819ebbdd0fe1bae31c3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3LI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52b05d669a253819ebbdd0fe1bae31c3 0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52b05d669a253819ebbdd0fe1bae31c3 0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52b05d669a253819ebbdd0fe1bae31c3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3LI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3LI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3LI 
00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=198eb85c8baf54298f0ae5d2e7dcae6750dae8b1ae564d7a925c5e01fca9a096 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0L5 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 198eb85c8baf54298f0ae5d2e7dcae6750dae8b1ae564d7a925c5e01fca9a096 3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 198eb85c8baf54298f0ae5d2e7dcae6750dae8b1ae564d7a925c5e01fca9a096 3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=198eb85c8baf54298f0ae5d2e7dcae6750dae8b1ae564d7a925c5e01fca9a096 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0L5 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0L5 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0L5 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=75bb919926967d296c1927fde9037bc80ac4106e460c52f3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XBI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 75bb919926967d296c1927fde9037bc80ac4106e460c52f3 0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 75bb919926967d296c1927fde9037bc80ac4106e460c52f3 0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=75bb919926967d296c1927fde9037bc80ac4106e460c52f3 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XBI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XBI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XBI 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:37.169 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b99812871865c62d291f5e200f74d64b52d96c4d3a325700 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MNP 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b99812871865c62d291f5e200f74d64b52d96c4d3a325700 2 00:25:37.170 10:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b99812871865c62d291f5e200f74d64b52d96c4d3a325700 2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b99812871865c62d291f5e200f74d64b52d96c4d3a325700 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MNP 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MNP 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MNP 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=54b5e9a87e4c39d0ff2b0fe003120e94 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Htv 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 54b5e9a87e4c39d0ff2b0fe003120e94 1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 54b5e9a87e4c39d0ff2b0fe003120e94 1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=54b5e9a87e4c39d0ff2b0fe003120e94 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Htv 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Htv 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Htv 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=82d1784af0bb2dbf75e2a37c4c7561c2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.t6P 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 82d1784af0bb2dbf75e2a37c4c7561c2 1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 82d1784af0bb2dbf75e2a37c4c7561c2 1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=82d1784af0bb2dbf75e2a37c4c7561c2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.t6P 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.t6P 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.t6P 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.170 10:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0cfcc50d7d8339b51484d4c979557c9c0c639279e86e131 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tkN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0cfcc50d7d8339b51484d4c979557c9c0c639279e86e131 2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0cfcc50d7d8339b51484d4c979557c9c0c639279e86e131 2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0cfcc50d7d8339b51484d4c979557c9c0c639279e86e131 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tkN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tkN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tkN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df6cc3dc869561c4455e7ae5d9497c22 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YKN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df6cc3dc869561c4455e7ae5d9497c22 0 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df6cc3dc869561c4455e7ae5d9497c22 0 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df6cc3dc869561c4455e7ae5d9497c22 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YKN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YKN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.YKN 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:37.170 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba5232c4d0f7370ccc29e63cb8204e4a035829b04ac455cda899392650804304 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yQi 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba5232c4d0f7370ccc29e63cb8204e4a035829b04ac455cda899392650804304 3 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba5232c4d0f7370ccc29e63cb8204e4a035829b04ac455cda899392650804304 3 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba5232c4d0f7370ccc29e63cb8204e4a035829b04ac455cda899392650804304 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:37.171 10:04:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yQi 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yQi 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yQi 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2786647 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2786647 ']' 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
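The `gen_dhchap_key` traces above repeat one pattern seven times: draw random bytes with `xxd` from `/dev/urandom`, then pipe the hex string through an inline `python -` step (`format_dhchap_key` → `format_key`) to produce a DHHC-1 secret, which is written to a `mktemp` file and `chmod 0600`ed. The snippet below is a minimal sketch of that formatting step, not SPDK's verbatim `nvmf/common.sh`; it assumes the conventional DH-HMAC-CHAP secret layout (base64 of the ASCII key with a little-endian CRC32 appended, framed as `DHHC-1:<digest-hex>:<base64>:`), and uses `python3` where the trace invokes `python`.

```shell
# Sketch of the traced format_dhchap_key / "python -" step (assumed layout,
# not SPDK's exact script).
format_dhchap_key_sketch() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64
import sys
import zlib

key = sys.argv[1].encode()      # the ASCII hex string from xxd, not raw bytes
digest = int(sys.argv[2])       # 0=null, 1=sha256, 2=sha384, 3=sha512
# 4-byte CRC32 of the key, appended little-endian before base64 encoding
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

# Key and digest taken from the keys[1] trace above (gen_dhchap_key null 48)
format_dhchap_key_sketch 75bb919926967d296c1927fde9037bc80ac4106e460c52f3 0
```

The base64 body of the result lines up with the secret that surfaces later in the log (`DHHC-1:00:NzViYjkxOTky...`), since the first 64 base64 characters encode exactly the 48-character ASCII key; only the trailing characters carry the CRC.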
00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.171 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3LI 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0L5 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0L5 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XBI 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
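The `host/auth.sh@80`–`@82` entries being traced here register each generated key file with the SPDK target through `rpc_cmd keyring_file_add_key`, adding `key<i>` and, when a controller key was generated, `ckey<i>`. Condensed from the trace (with `rpc_cmd` replaced by an echo stub for illustration; in the real run it drives `scripts/rpc.py` against `/var/tmp/spdk.sock`), the loop looks like:

```shell
# Stand-in stub; the traced run invokes the SPDK RPC client instead.
rpc_cmd() { echo "rpc: $*"; }

# File names taken from the gen_dhchap_key traces above (first two slots shown)
keys=(/tmp/spdk.key-null.3LI /tmp/spdk.key-null.XBI)
ckeys=(/tmp/spdk.key-sha512.0L5 /tmp/spdk.key-sha384.MNP)

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty in the trace, so this guard skips the last slot there
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```

This matches the interleaving visible in the log: `key0`, `ckey0`, `key1`, `ckey1`, and so on, with the `[[ -n '' ]]` check failing for `ckeys[4]`.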
00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MNP ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MNP 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Htv 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.t6P ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t6P 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.tkN 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.YKN ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.YKN 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yQi 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.430 10:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.430 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.430 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:37.430 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:37.689 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.690 10:04:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:37.690 10:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:40.223 Waiting for block devices as requested 00:25:40.223 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:40.482 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:40.482 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:40.482 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:40.482 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:40.740 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.740 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.740 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.740 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.999 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:40.999 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:40.999 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:41.257 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:41.257 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:41.257 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:41.257 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:41.516 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:42.083 No valid GPT data, bailing 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:42.083 00:25:42.083 Discovery Log Number of Records 2, Generation counter 2 00:25:42.083 =====Discovery Log Entry 0====== 00:25:42.083 trtype: tcp 00:25:42.083 adrfam: ipv4 00:25:42.083 subtype: current discovery subsystem 00:25:42.083 treq: not specified, sq flow control disable supported 00:25:42.083 portid: 1 00:25:42.083 trsvcid: 4420 00:25:42.083 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:42.083 traddr: 10.0.0.1 00:25:42.083 eflags: none 00:25:42.083 sectype: none 00:25:42.083 =====Discovery Log Entry 1====== 00:25:42.083 trtype: tcp 00:25:42.083 adrfam: ipv4 00:25:42.083 subtype: nvme subsystem 00:25:42.083 treq: not specified, sq flow control disable supported 00:25:42.083 portid: 1 00:25:42.083 trsvcid: 4420 00:25:42.083 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:42.083 traddr: 10.0.0.1 00:25:42.083 eflags: none 00:25:42.083 sectype: none 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.083 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.084 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.084 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.084 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.084 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 nvme0n1 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.343 10:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 nvme0n1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 10:04:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.603 
10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.603 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.862 nvme0n1 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.862 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.863 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:43.122 nvme0n1 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.122 nvme0n1 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.122 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.382 10:04:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 nvme0n1 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.382 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 
10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:43.641 
10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 10:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.641 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 nvme0n1 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.900 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:43.900 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.901 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.901 nvme0n1 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.901 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.160 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.160 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.161 nvme0n1 00:25:44.161 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.161 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:44.420 10:04:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.420 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.421 nvme0n1 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.421 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:44.680 10:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.680 10:04:18 
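Every `--dhchap-key` value exchanged in this log uses the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:<hh>:<base64>:`, where the two-digit field selects the optional secret transform (`00` meaning none) and the base64 payload carries the secret with a 4-byte checksum appended. A small parsing sketch over one key copied verbatim from the trace; treating the trailing 4 bytes as a CRC-32 is my reading of the format, not something the log itself shows, and the CRC verification is omitted:

```shell
# Parse a DH-HMAC-CHAP secret string (format assumption: DHHC-1:<hh>:<base64>:).
# Key value copied verbatim from the log above (keyid 0).
key='DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y:'
IFS=':' read -r prefix xform b64 _ <<< "$key"
decoded_len=$(printf '%s' "$b64" | base64 -d | wc -c)
secret_len=$((decoded_len - 4))   # last 4 bytes: appended checksum (assumption)
echo "prefix=$prefix transform=$xform secret_bytes=$secret_len"
```

For this key the payload decodes to 36 bytes, i.e. a 32-byte secret plus the 4-byte checksum, which matches the shortest secret size the protocol allows.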
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.680 nvme0n1 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.680 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.681 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.940 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.199 nvme0n1 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.199 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.200 
10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 nvme0n1 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.459 10:04:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.459 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.460 10:04:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.719 nvme0n1 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.719 10:04:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:45.719 
10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.719 10:04:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.719 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.977 nvme0n1 00:25:45.977 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.977 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.977 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.977 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.977 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.977 10:04:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.235 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.236 
10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.236 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 nvme0n1 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.495 10:04:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.495 10:04:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.754 nvme0n1 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.754 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.012 10:04:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.012 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.271 nvme0n1 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.271 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 10:04:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 10:04:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.838 nvme0n1 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.838 10:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.838 10:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.838 10:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.838 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.839 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.097 nvme0n1 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.097 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.356 10:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.356 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.357 10:04:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.357 10:04:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.615 nvme0n1 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.615 10:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.615 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.616 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.182 nvme0n1 00:25:49.182 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.182 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.182 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.182 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.182 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.440 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.441 10:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.441 10:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.441 10:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.441 10:04:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.008 nvme0n1 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.008 10:04:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.008 10:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.576 nvme0n1 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.576 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.143 nvme0n1 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.143 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.402 
10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.402 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.403 10:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 nvme0n1 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 nvme0n1 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.970 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.229 
10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:52.229 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.230 nvme0n1 
00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.230 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:52.489 10:04:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.489 
10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.489 nvme0n1 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.489 10:04:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.489 10:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.489 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.490 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.749 nvme0n1 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.749 10:04:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.749 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.009 nvme0n1 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.009 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.268 nvme0n1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.268 
10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.268 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.527 nvme0n1 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 
00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.527 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.528 10:04:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.528 10:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.786 nvme0n1 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.786 10:04:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.786 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.787 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.055 nvme0n1 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.055 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.314 nvme0n1 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.314 10:04:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y:
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=:
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y:
00:25:54.314 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]]
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=:
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.315 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.573 nvme0n1
00:25:54.573 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.573 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.573 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.573 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.573 10:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==:
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==:
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==:
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]]
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==:
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.573 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.574 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.832 nvme0n1
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT:
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y:
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT:
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]]
00:25:54.832 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y:
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.833 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.091 nvme0n1
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.091 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==:
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn:
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==:
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn:
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.348 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.607 nvme0n1
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.607 10:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=:
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=:
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.607 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.867 nvme0n1
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y:
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=:
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y:
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=:
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.867 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.439 nvme0n1
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==:
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==:
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==:
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]]
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==:
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:56.439 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.440 10:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.699 nvme0n1
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT:
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y:
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT:
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y:
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host --
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.699 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.700 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.700 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.700 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.267 nvme0n1 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.267 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:25:57.268 10:04:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.268 10:04:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.268 10:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.527 nvme0n1 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.527 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.786 10:04:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:57.786 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.049 nvme0n1 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.049 10:04:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.049 10:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.668 nvme0n1 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.668 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.968 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.968 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:58.968 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.968 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.968 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.969 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.536 nvme0n1 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.536 10:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.105 nvme0n1 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.105 10:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.674 nvme0n1 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.674 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.242 nvme0n1 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:01.242 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.243 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:01.502 10:04:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.502 nvme0n1 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.502 10:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:01.502 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.503 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.762 nvme0n1 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:01.762 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.763 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 nvme0n1 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.022 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.281 nvme0n1 00:26:02.281 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.281 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.281 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.281 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.282 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.541 nvme0n1 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.541 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.542 10:04:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.542 10:04:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.542 10:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 nvme0n1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:02.802 10:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.802 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.061 nvme0n1 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.061 
10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.061 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.062 10:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.062 nvme0n1 00:26:03.062 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.321 10:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.321 10:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 nvme0n1 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:03.580 10:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.580 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.581 10:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.581 nvme0n1 00:26:03.581 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.581 
10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.581 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.581 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.581 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.581 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.840 
10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.840 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.099 nvme0n1 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.099 10:04:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.099 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.100 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.359 nvme0n1 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.359 10:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 nvme0n1 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.619 10:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.619 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.878 nvme0n1 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.878 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.137 10:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.137 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.396 nvme0n1 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.397 
10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.397 10:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.397 10:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.656 nvme0n1 00:26:05.656 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.656 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.656 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.656 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.656 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.914 10:04:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.914 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.173 nvme0n1 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:06.173 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:06.173 
10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.174 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.433 10:04:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.433 10:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.692 nvme0n1 00:26:06.692 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.693 10:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.693 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 nvme0n1 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.262 10:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.262 10:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 nvme0n1 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.521 
10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.521 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJiMDVkNjY5YTI1MzgxOWViYmRkMGZlMWJhZTMxYzNr+97Y: 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTk4ZWI4NWM4YmFmNTQyOThmMGFlNWQyZTdkY2FlNjc1MGRhZThiMWFlNTY0ZDdhOTI1YzVlMDFmY2E5YTA5Nt7dM/0=: 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.780 10:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.780 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.349 nvme0n1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.349 10:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:08.349 10:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.349 10:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.349 10:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.918 nvme0n1 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.918 10:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:08.918 10:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.918 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.487 nvme0n1 00:26:09.487 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.487 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.487 10:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDBjZmNjNTBkN2Q4MzM5YjUxNDg0ZDRjOTc5NTU3YzljMGM2MzkyNzllODZlMTMxF9ifCg==: 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: ]] 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY2Y2MzZGM4Njk1NjFjNDQ1NWU3YWU1ZDk0OTdjMjIcnNzn: 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.487 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.746 10:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.746 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.315 nvme0n1 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE1MjMyYzRkMGY3MzcwY2NjMjllNjNjYjgyMDRlNGEwMzU4MjliMDRhYzQ1NWNkYTg5OTM5MjY1MDgwNDMwNGyVqTQ=: 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.315 
10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.315 10:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.883 nvme0n1 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.883 request: 00:26:10.883 { 00:26:10.883 "name": "nvme0", 00:26:10.883 "trtype": "tcp", 00:26:10.883 "traddr": "10.0.0.1", 00:26:10.883 "adrfam": "ipv4", 00:26:10.883 "trsvcid": "4420", 00:26:10.883 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:10.883 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:10.883 "prchk_reftag": false, 00:26:10.883 "prchk_guard": false, 00:26:10.883 "hdgst": false, 00:26:10.883 "ddgst": false, 00:26:10.883 "allow_unrecognized_csi": false, 00:26:10.883 "method": "bdev_nvme_attach_controller", 00:26:10.883 "req_id": 1 00:26:10.883 } 00:26:10.883 Got JSON-RPC error 
response 00:26:10.883 response: 00:26:10.883 { 00:26:10.883 "code": -5, 00:26:10.883 "message": "Input/output error" 00:26:10.883 } 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.883 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.143 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.143 request: 
00:26:11.143 { 00:26:11.143 "name": "nvme0", 00:26:11.143 "trtype": "tcp", 00:26:11.143 "traddr": "10.0.0.1", 00:26:11.143 "adrfam": "ipv4", 00:26:11.143 "trsvcid": "4420", 00:26:11.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:11.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:11.144 "prchk_reftag": false, 00:26:11.144 "prchk_guard": false, 00:26:11.144 "hdgst": false, 00:26:11.144 "ddgst": false, 00:26:11.144 "dhchap_key": "key2", 00:26:11.144 "allow_unrecognized_csi": false, 00:26:11.144 "method": "bdev_nvme_attach_controller", 00:26:11.144 "req_id": 1 00:26:11.144 } 00:26:11.144 Got JSON-RPC error response 00:26:11.144 response: 00:26:11.144 { 00:26:11.144 "code": -5, 00:26:11.144 "message": "Input/output error" 00:26:11.144 } 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.144 10:04:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.144 request: 00:26:11.144 { 00:26:11.144 "name": "nvme0", 00:26:11.144 "trtype": "tcp", 00:26:11.144 "traddr": "10.0.0.1", 00:26:11.144 "adrfam": "ipv4", 00:26:11.144 "trsvcid": "4420", 00:26:11.144 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:11.144 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:11.144 "prchk_reftag": false, 00:26:11.144 "prchk_guard": false, 00:26:11.144 "hdgst": false, 00:26:11.144 "ddgst": false, 00:26:11.144 "dhchap_key": "key1", 00:26:11.144 "dhchap_ctrlr_key": "ckey2", 00:26:11.144 "allow_unrecognized_csi": false, 00:26:11.144 "method": "bdev_nvme_attach_controller", 00:26:11.144 "req_id": 1 00:26:11.144 } 00:26:11.144 Got JSON-RPC error response 00:26:11.144 response: 00:26:11.144 { 00:26:11.144 "code": -5, 00:26:11.144 "message": "Input/output error" 00:26:11.144 } 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.144 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.404 nvme0n1 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:11.404 10:04:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:11.404 
10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.404 10:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.663 request: 00:26:11.663 { 00:26:11.663 "name": "nvme0", 00:26:11.663 "dhchap_key": "key1", 00:26:11.663 "dhchap_ctrlr_key": "ckey2", 00:26:11.663 "method": "bdev_nvme_set_keys", 00:26:11.663 "req_id": 1 00:26:11.663 } 00:26:11.663 Got JSON-RPC error response 00:26:11.663 response: 
00:26:11.663 { 00:26:11.663 "code": -13, 00:26:11.663 "message": "Permission denied" 00:26:11.663 } 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:11.663 10:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.600 10:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:12.600 10:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.977 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzViYjkxOTkyNjk2N2QyOTZjMTkyN2ZkZTkwMzdiYzgwYWM0MTA2ZTQ2MGM1MmYz0zcaNg==: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk5ODEyODcxODY1YzYyZDI5MWY1ZTIwMGY3NGQ2NGI1MmQ5NmM0ZDNhMzI1NzAwQZkBxw==: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 nvme0n1 00:26:13.978 10:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTRiNWU5YTg3ZTRjMzlkMGZmMmIwZmUwMDMxMjBlOTTHwLrT: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJkMTc4NGFmMGJiMmRiZjc1ZTJhMzdjNGM3NTYxYzLapq9y: 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:13.978 10:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 request: 00:26:13.978 { 00:26:13.978 "name": "nvme0", 00:26:13.978 "dhchap_key": "key2", 00:26:13.978 "dhchap_ctrlr_key": "ckey1", 00:26:13.978 "method": "bdev_nvme_set_keys", 00:26:13.978 "req_id": 1 00:26:13.978 } 00:26:13.978 Got JSON-RPC error response 00:26:13.978 response: 00:26:13.978 { 00:26:13.978 "code": -13, 00:26:13.978 "message": "Permission denied" 00:26:13.978 } 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:13.978 10:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:13.978 10:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:14.913 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:14.914 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:14.914 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:14.914 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.914 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.172 rmmod nvme_tcp 
00:26:15.172 rmmod nvme_fabrics 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2786647 ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2786647 ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2786647' 00:26:15.172 killing process with pid 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2786647 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:15.172 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:15.173 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:15.173 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:15.173 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.430 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.430 10:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:17.332 10:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:17.332 10:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:20.625 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:20.625 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:22.002 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:22.002 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3LI /tmp/spdk.key-null.XBI /tmp/spdk.key-sha256.Htv /tmp/spdk.key-sha384.tkN 
/tmp/spdk.key-sha512.yQi /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:22.002 10:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:24.533 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:24.533 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:24.533 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:24.792 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:24.792 00:26:24.792 real 0m54.498s 00:26:24.792 user 0m48.734s 00:26:24.792 sys 0m12.501s 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.792 ************************************ 00:26:24.792 END TEST nvmf_auth_host 00:26:24.792 ************************************ 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.792 ************************************ 00:26:24.792 START TEST nvmf_digest 00:26:24.792 ************************************ 00:26:24.792 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:25.052 * Looking for test storage... 00:26:25.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.052 10:04:58 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.052 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.053 --rc genhtml_branch_coverage=1 00:26:25.053 --rc genhtml_function_coverage=1 00:26:25.053 --rc genhtml_legend=1 00:26:25.053 --rc geninfo_all_blocks=1 00:26:25.053 --rc geninfo_unexecuted_blocks=1 00:26:25.053 00:26:25.053 ' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.053 --rc genhtml_branch_coverage=1 00:26:25.053 --rc genhtml_function_coverage=1 00:26:25.053 --rc genhtml_legend=1 00:26:25.053 --rc geninfo_all_blocks=1 00:26:25.053 --rc geninfo_unexecuted_blocks=1 00:26:25.053 00:26:25.053 ' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.053 --rc genhtml_branch_coverage=1 00:26:25.053 --rc genhtml_function_coverage=1 00:26:25.053 --rc genhtml_legend=1 00:26:25.053 --rc geninfo_all_blocks=1 00:26:25.053 --rc geninfo_unexecuted_blocks=1 00:26:25.053 00:26:25.053 ' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:25.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.053 --rc genhtml_branch_coverage=1 00:26:25.053 --rc genhtml_function_coverage=1 00:26:25.053 --rc genhtml_legend=1 00:26:25.053 --rc geninfo_all_blocks=1 00:26:25.053 --rc geninfo_unexecuted_blocks=1 00:26:25.053 00:26:25.053 ' 00:26:25.053 10:04:58 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.053 
10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.053 10:04:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.053 10:04:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.673 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.673 10:05:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:31.674 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:31.674 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:31.674 Found net devices under 0000:86:00.0: cvl_0_0 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:31.674 Found net devices under 0000:86:00.1: cvl_0_1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
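[Editor's note] The device-discovery loop traced above ("Found net devices under 0000:86:00.0: cvl_0_0") boils down to globbing `/sys/bus/pci/devices/$pci/net/` and stripping the directory prefix with `${pci_net_devs[@]##*/}`. A minimal sketch of that pattern; the PCI addresses and `cvl_*` names are taken from this log, but the sysfs tree is faked under a temp directory so the sketch runs without hardware or root:

```shell
# Sketch of the pci_net_devs discovery pattern from nvmf/common.sh.
# A throwaway directory stands in for /sys/bus/pci/devices.
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0" "$root/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$root/$pci/net/"*)          # glob the netdev entries
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$root"
```

The real script additionally checks link state (`[[ up == up ]]` in the trace) before accepting a device.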
00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:26:31.674 00:26:31.674 --- 10.0.0.2 ping statistics --- 00:26:31.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.674 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:26:31.674 00:26:31.674 --- 10.0.0.1 ping statistics --- 00:26:31.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.674 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.674 ************************************ 00:26:31.674 START TEST nvmf_digest_clean 00:26:31.674 ************************************ 00:26:31.674 
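[Editor's note] The `[: : integer expression expected` message earlier in this trace comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33: test(1)'s `-eq` requires integer operands, and the variable under test was empty. A minimal reproduction with a hypothetical variable name, plus the usual `${var:-0}` default that keeps the numeric test well-formed:

```shell
# maybe_flag is a stand-in for the empty variable behind the
# "integer expression expected" error seen in the trace.
maybe_flag=""

# Unguarded, [ "$maybe_flag" -eq 1 ] would fail with that same message.
# Defaulting to 0 makes the operand a valid integer:
if [ "${maybe_flag:-0}" -eq 1 ]; then
    state=enabled
else
    state=disabled
fi
echo "$state"
```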
10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.674 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2800409 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2800409 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2800409 ']' 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.675 10:05:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.675 10:05:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.675 [2024-11-20 10:05:04.560051] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:31.675 [2024-11-20 10:05:04.560091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.675 [2024-11-20 10:05:04.637594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.675 [2024-11-20 10:05:04.680348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.675 [2024-11-20 10:05:04.680380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.675 [2024-11-20 10:05:04.680388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.675 [2024-11-20 10:05:04.680394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.675 [2024-11-20 10:05:04.680399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
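[Editor's note] The `waitforlisten 2800409` step above polls until the target process is up and listening on its RPC socket (`/var/tmp/spdk.sock`), giving up after `max_retries` attempts. A simplified sketch of that polling loop; a marker file created by a background job stands in for the UNIX socket, so no SPDK process is needed:

```shell
# Poll for a readiness marker with a retry budget, in the spirit of
# waitforlisten in autotest_common.sh.
marker=$(mktemp -u)
( sleep 0.2; : > "$marker" ) &    # stand-in for the target opening its socket

max_retries=50
i=0
ready=0
until [ -e "$marker" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && { echo "timed out"; exit 1; }
    sleep 0.1
done
ready=1
echo "listening"
wait
rm -f "$marker"
```

The real helper additionally checks that the PID is still alive each iteration, so a crashed target fails fast instead of burning the retry budget.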
00:26:31.675 [2024-11-20 10:05:04.680956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.934 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.934 null0 00:26:32.193 [2024-11-20 10:05:05.514722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.193 [2024-11-20 10:05:05.538934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2800489 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2800489 /var/tmp/bperf.sock 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2800489 ']' 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.193 [2024-11-20 10:05:05.590438] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:32.193 [2024-11-20 10:05:05.590479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800489 ] 00:26:32.193 [2024-11-20 10:05:05.664023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.193 [2024-11-20 10:05:05.706296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:32.193 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:32.451 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.451 10:05:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.710 nvme0n1 00:26:32.710 10:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:32.710 10:05:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.969 Running I/O for 2 seconds... 00:26:34.842 25684.00 IOPS, 100.33 MiB/s [2024-11-20T09:05:08.424Z] 25491.00 IOPS, 99.57 MiB/s 00:26:34.842 Latency(us) 00:26:34.842 [2024-11-20T09:05:08.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.842 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:34.842 nvme0n1 : 2.01 25505.34 99.63 0.00 0.00 5013.53 2309.36 16477.62 00:26:34.842 [2024-11-20T09:05:08.424Z] =================================================================================================================== 00:26:34.842 [2024-11-20T09:05:08.424Z] Total : 25505.34 99.63 0.00 0.00 5013.53 2309.36 16477.62 00:26:34.842 { 00:26:34.842 "results": [ 00:26:34.842 { 00:26:34.842 "job": "nvme0n1", 00:26:34.842 "core_mask": "0x2", 00:26:34.842 "workload": "randread", 00:26:34.842 "status": "finished", 00:26:34.842 "queue_depth": 128, 00:26:34.842 "io_size": 4096, 00:26:34.842 "runtime": 2.006364, 00:26:34.842 "iops": 25505.34200175043, 00:26:34.842 "mibps": 99.63024219433761, 00:26:34.842 "io_failed": 0, 00:26:34.842 "io_timeout": 0, 00:26:34.842 "avg_latency_us": 5013.528576769931, 00:26:34.842 "min_latency_us": 2309.3638095238093, 00:26:34.842 "max_latency_us": 16477.62285714286 00:26:34.842 } 00:26:34.842 ], 00:26:34.842 "core_count": 1 00:26:34.842 } 00:26:34.842 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:34.843 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:34.843 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:34.843 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:34.843 | select(.opcode=="crc32c") 00:26:34.843 | "\(.module_name) \(.executed)"' 00:26:34.843 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2800489 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2800489 ']' 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2800489 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800489 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800489' 00:26:35.102 killing process with pid 2800489 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2800489 00:26:35.102 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.102 00:26:35.102 Latency(us) 00:26:35.102 [2024-11-20T09:05:08.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.102 [2024-11-20T09:05:08.684Z] =================================================================================================================== 00:26:35.102 [2024-11-20T09:05:08.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.102 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2800489 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2801127 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2801127 /var/tmp/bperf.sock 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2801127 ']' 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.361 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.361 [2024-11-20 10:05:08.849733] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:35.361 [2024-11-20 10:05:08.849778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801127 ] 00:26:35.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.361 Zero copy mechanism will not be used. 
00:26:35.361 [2024-11-20 10:05:08.923729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.620 [2024-11-20 10:05:08.966216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.620 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.620 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:35.620 10:05:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:35.620 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:35.620 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:35.879 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.879 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.137 nvme0n1 00:26:36.137 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:36.137 10:05:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.137 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.137 Zero copy mechanism will not be used. 00:26:36.137 Running I/O for 2 seconds... 
00:26:38.450 5631.00 IOPS, 703.88 MiB/s [2024-11-20T09:05:12.032Z] 5899.50 IOPS, 737.44 MiB/s 00:26:38.450 Latency(us) 00:26:38.450 [2024-11-20T09:05:12.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.450 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:38.450 nvme0n1 : 2.00 5901.55 737.69 0.00 0.00 2708.74 592.94 11983.73 00:26:38.450 [2024-11-20T09:05:12.032Z] =================================================================================================================== 00:26:38.450 [2024-11-20T09:05:12.032Z] Total : 5901.55 737.69 0.00 0.00 2708.74 592.94 11983.73 00:26:38.450 { 00:26:38.450 "results": [ 00:26:38.450 { 00:26:38.450 "job": "nvme0n1", 00:26:38.450 "core_mask": "0x2", 00:26:38.450 "workload": "randread", 00:26:38.450 "status": "finished", 00:26:38.450 "queue_depth": 16, 00:26:38.450 "io_size": 131072, 00:26:38.450 "runtime": 2.002015, 00:26:38.450 "iops": 5901.5541841594595, 00:26:38.450 "mibps": 737.6942730199324, 00:26:38.450 "io_failed": 0, 00:26:38.450 "io_timeout": 0, 00:26:38.450 "avg_latency_us": 2708.7361496080443, 00:26:38.450 "min_latency_us": 592.9447619047619, 00:26:38.450 "max_latency_us": 11983.725714285714 00:26:38.450 } 00:26:38.450 ], 00:26:38.450 "core_count": 1 00:26:38.450 } 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:38.450 | select(.opcode=="crc32c") 00:26:38.450 | "\(.module_name) \(.executed)"' 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2801127 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2801127 ']' 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2801127 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801127 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801127' 00:26:38.450 killing process with pid 2801127 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2801127 00:26:38.450 Received shutdown signal, test time was about 2.000000 seconds 
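The accel-stats step above pipes the `accel_get_stats` RPC output through a jq filter so the caller can `read -r acc_module acc_executed`. A minimal standalone sketch of that filter, using a hypothetical stats payload (the field names and shape are inferred from the jq expression in the trace, not from real SPDK output, and the real RPC returns more fields than shown):

```shell
# Hypothetical accel_get_stats payload -- only the fields the filter touches.
stats='{"operations":[
  {"opcode":"crc32c","module_name":"software","executed":91},
  {"opcode":"copy","module_name":"software","executed":12}]}'

# Same filter as host/digest.sh@37: keep only crc32c entries and print
# "<module_name> <executed>" on one line for the read loop.
echo "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"'
```

With the sample payload above this prints `software 91`, which is how the test later compares `exp_module=software` against the module that actually executed the crc32c operations.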
00:26:38.450 00:26:38.450 Latency(us) 00:26:38.450 [2024-11-20T09:05:12.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.450 [2024-11-20T09:05:12.032Z] =================================================================================================================== 00:26:38.450 [2024-11-20T09:05:12.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.450 10:05:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2801127 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2801604 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2801604 /var/tmp/bperf.sock 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2801604 ']' 00:26:38.710 10:05:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:38.710 [2024-11-20 10:05:12.089455] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:38.710 [2024-11-20 10:05:12.089506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801604 ] 00:26:38.710 [2024-11-20 10:05:12.163872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.710 [2024-11-20 10:05:12.200864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:38.710 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:38.969 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.969 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.536 nvme0n1 00:26:39.536 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:39.536 10:05:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.536 Running I/O for 2 seconds... 
00:26:41.848 28396.00 IOPS, 110.92 MiB/s [2024-11-20T09:05:15.430Z] 28639.50 IOPS, 111.87 MiB/s 00:26:41.848 Latency(us) 00:26:41.848 [2024-11-20T09:05:15.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.848 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:41.848 nvme0n1 : 2.00 28640.88 111.88 0.00 0.00 4463.21 1755.43 15978.30 00:26:41.848 [2024-11-20T09:05:15.430Z] =================================================================================================================== 00:26:41.848 [2024-11-20T09:05:15.430Z] Total : 28640.88 111.88 0.00 0.00 4463.21 1755.43 15978.30 00:26:41.848 { 00:26:41.848 "results": [ 00:26:41.848 { 00:26:41.848 "job": "nvme0n1", 00:26:41.848 "core_mask": "0x2", 00:26:41.848 "workload": "randwrite", 00:26:41.848 "status": "finished", 00:26:41.848 "queue_depth": 128, 00:26:41.848 "io_size": 4096, 00:26:41.848 "runtime": 2.004373, 00:26:41.848 "iops": 28640.87672304506, 00:26:41.848 "mibps": 111.87842469939477, 00:26:41.848 "io_failed": 0, 00:26:41.848 "io_timeout": 0, 00:26:41.848 "avg_latency_us": 4463.2060991400585, 00:26:41.849 "min_latency_us": 1755.4285714285713, 00:26:41.849 "max_latency_us": 15978.300952380952 00:26:41.849 } 00:26:41.849 ], 00:26:41.849 "core_count": 1 00:26:41.849 } 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:41.849 | select(.opcode=="crc32c") 00:26:41.849 | "\(.module_name) \(.executed)"' 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2801604 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2801604 ']' 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2801604 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801604 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801604' 00:26:41.849 killing process with pid 2801604 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2801604 00:26:41.849 Received shutdown signal, test time was about 2.000000 seconds 
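As a quick sanity check on the bdevperf summary above: the reported MiB/s is just IOPS times the I/O size in bytes, divided by 2^20. A standalone arithmetic check using the `iops` and `io_size` values copied from the JSON results block (not part of the test itself):

```shell
# 28640.87672304506 IOPS * 4096 B per I/O = bytes/s; / 1048576 = MiB/s.
awk 'BEGIN { printf "%.2f\n", 28640.87672304506 * 4096 / 1048576 }'
# -> 111.88, consistent with the "mibps" field reported above
```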
00:26:41.849 00:26:41.849 Latency(us) 00:26:41.849 [2024-11-20T09:05:15.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.849 [2024-11-20T09:05:15.431Z] =================================================================================================================== 00:26:41.849 [2024-11-20T09:05:15.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.849 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2801604 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2802082 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2802082 /var/tmp/bperf.sock 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2802082 ']' 00:26:42.108 10:05:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.108 [2024-11-20 10:05:15.512814] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:42.108 [2024-11-20 10:05:15.512862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802082 ] 00:26:42.108 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.108 Zero copy mechanism will not be used. 
00:26:42.108 [2024-11-20 10:05:15.587050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.108 [2024-11-20 10:05:15.629148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:42.108 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:42.367 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.367 10:05:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.934 nvme0n1 00:26:42.934 10:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:42.934 10:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.934 Zero copy mechanism will not be used. 00:26:42.934 Running I/O for 2 seconds... 
00:26:45.244 6069.00 IOPS, 758.62 MiB/s [2024-11-20T09:05:18.826Z] 6744.50 IOPS, 843.06 MiB/s 00:26:45.244 Latency(us) 00:26:45.244 [2024-11-20T09:05:18.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.244 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:45.244 nvme0n1 : 2.00 6742.27 842.78 0.00 0.00 2369.00 1677.41 12108.56 00:26:45.244 [2024-11-20T09:05:18.826Z] =================================================================================================================== 00:26:45.244 [2024-11-20T09:05:18.826Z] Total : 6742.27 842.78 0.00 0.00 2369.00 1677.41 12108.56 00:26:45.244 { 00:26:45.244 "results": [ 00:26:45.244 { 00:26:45.244 "job": "nvme0n1", 00:26:45.244 "core_mask": "0x2", 00:26:45.244 "workload": "randwrite", 00:26:45.244 "status": "finished", 00:26:45.244 "queue_depth": 16, 00:26:45.244 "io_size": 131072, 00:26:45.244 "runtime": 2.003481, 00:26:45.244 "iops": 6742.265087615006, 00:26:45.244 "mibps": 842.7831359518758, 00:26:45.244 "io_failed": 0, 00:26:45.244 "io_timeout": 0, 00:26:45.244 "avg_latency_us": 2368.995128389526, 00:26:45.244 "min_latency_us": 1677.4095238095238, 00:26:45.244 "max_latency_us": 12108.55619047619 00:26:45.244 } 00:26:45.244 ], 00:26:45.244 "core_count": 1 00:26:45.244 } 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:45.244 | select(.opcode=="crc32c") 00:26:45.244 | "\(.module_name) \(.executed)"' 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2802082 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2802082 ']' 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2802082 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802082 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802082' 00:26:45.244 killing process with pid 2802082 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2802082 00:26:45.244 Received shutdown signal, test time was about 2.000000 seconds 
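The killprocess sequence that repeats throughout the trace (autotest_common.sh@954-973) probes the target's command name with `ps --no-headers -o comm=` and only takes the plain `kill` path when the name is not `sudo`. A minimal sketch of that guard, reconstructed from the trace (the real helper also handles retries, a `kill -9` fallback, and killing the sudo child process, all elided here):

```shell
# Sketch of the killprocess guard seen in the trace above.
killprocess() {
  local pid=$1 process_name
  # Same probe as autotest_common.sh@960: command name only, no header line.
  process_name=$(ps --no-headers -o comm= "$pid") || return 1
  if [ "$process_name" = "sudo" ]; then
    # Real helper kills the sudo child instead of the sudo wrapper; elided.
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
}

# Usage: terminate a throwaway background process.
sleep 30 &
killprocess $!
```

The sudo check matters because `kill -9` on the sudo wrapper would orphan the actual workload underneath it; in the trace the process names come back as `reactor_0`/`reactor_1`, so the plain-kill branch is always taken.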
00:26:45.244 00:26:45.244 Latency(us) 00:26:45.244 [2024-11-20T09:05:18.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.244 [2024-11-20T09:05:18.826Z] =================================================================================================================== 00:26:45.244 [2024-11-20T09:05:18.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.244 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2802082 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2800409 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2800409 ']' 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2800409 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800409 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800409' 00:26:45.503 killing process with pid 2800409 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2800409 00:26:45.503 10:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2800409 00:26:45.503 00:26:45.503 
real 0m14.552s 00:26:45.503 user 0m27.345s 00:26:45.503 sys 0m4.559s 00:26:45.503 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.503 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.503 ************************************ 00:26:45.503 END TEST nvmf_digest_clean 00:26:45.503 ************************************ 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:45.762 ************************************ 00:26:45.762 START TEST nvmf_digest_error 00:26:45.762 ************************************ 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2802788 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2802788 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2802788 ']' 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.762 [2024-11-20 10:05:19.186718] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:45.762 [2024-11-20 10:05:19.186761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.762 [2024-11-20 10:05:19.257223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.762 [2024-11-20 10:05:19.298184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.762 [2024-11-20 10:05:19.298226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:45.762 [2024-11-20 10:05:19.298234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.762 [2024-11-20 10:05:19.298240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.762 [2024-11-20 10:05:19.298245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.762 [2024-11-20 10:05:19.298781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:45.762 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.021 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.021 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:46.021 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.021 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.021 [2024-11-20 10:05:19.379253] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.022 10:05:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.022 null0 00:26:46.022 [2024-11-20 10:05:19.474836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.022 [2024-11-20 10:05:19.499041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2802817 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2802817 /var/tmp/bperf.sock 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2802817 ']' 
00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.022 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.022 [2024-11-20 10:05:19.548647] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:46.022 [2024-11-20 10:05:19.548688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802817 ] 00:26:46.282 [2024-11-20 10:05:19.620919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.282 [2024-11-20 10:05:19.665672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.282 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.282 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:46.282 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.282 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.541 10:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.109 nvme0n1 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:47.109 10:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.109 Running I/O for 2 seconds... 00:26:47.109 [2024-11-20 10:05:20.551314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.551347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.551357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.563727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.563750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.563759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.576415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.576436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.576445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.588955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.588977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3735 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.588986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.597180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.597205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.597214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.609363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.609384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.621263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.109 [2024-11-20 10:05:20.621283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.109 [2024-11-20 10:05:20.621292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.109 [2024-11-20 10:05:20.633695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.110 [2024-11-20 10:05:20.633716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.110 [2024-11-20 10:05:20.633724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.110 [2024-11-20 10:05:20.646518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.110 [2024-11-20 10:05:20.646540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.110 [2024-11-20 10:05:20.646548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.110 [2024-11-20 10:05:20.654748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.110 [2024-11-20 10:05:20.654770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.110 [2024-11-20 10:05:20.654778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.110 [2024-11-20 10:05:20.666661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.110 [2024-11-20 10:05:20.666681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.110 [2024-11-20 10:05:20.666690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.110 [2024-11-20 10:05:20.677850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.110 [2024-11-20 
10:05:20.677870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.110 [2024-11-20 10:05:20.677878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.369 [2024-11-20 10:05:20.690404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.369 [2024-11-20 10:05:20.690424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.369 [2024-11-20 10:05:20.690432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.369 [2024-11-20 10:05:20.698738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.369 [2024-11-20 10:05:20.698757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.369 [2024-11-20 10:05:20.698765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.369 [2024-11-20 10:05:20.709005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.369 [2024-11-20 10:05:20.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.369 [2024-11-20 10:05:20.709033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.369 [2024-11-20 10:05:20.718809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x972370) 00:26:47.369 [2024-11-20 10:05:20.718829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.718837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.727146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.727166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.727177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.739181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.739207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.739216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.747526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.747546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.747554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.758073] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.758093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.758102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.769548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.769568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.769576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.777823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.777844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.777852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.787641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.787661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.787669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.796173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.796193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.796206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.805352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.805372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.816879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.816903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.816911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.826078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.826098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.836064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.836084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.836092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.848542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.848562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.848570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.859167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.859187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.859195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.868163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.868183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.868191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.880549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.880569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.880577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.890598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.890618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.890625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.899337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.899357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.899365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.908874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.908895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15486 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.908903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.919158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.919178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.919186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.927253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.927273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.927280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.370 [2024-11-20 10:05:20.939776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.370 [2024-11-20 10:05:20.939796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.370 [2024-11-20 10:05:20.939804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.948637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.948657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.948665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.958175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.958196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.958211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.967384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.967404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.967412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.977612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.977633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.977641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.985502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.985522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.985535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:20.995129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:20.995148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:20.995156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.004826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.004846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.004854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.014273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.014293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.014301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.023818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.023837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.023845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.032894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.032914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.032922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.041422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.041441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.041449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.051762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.051781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.051790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.060793] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.060814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.060822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.069495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.069515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.069523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.081689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.081709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.081718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.090856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.090877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.090885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.100206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.100226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.100234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.109704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.109725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.109734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.119608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.119628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.119640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.127508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.127529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.127537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.138081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.138101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.630 [2024-11-20 10:05:21.138108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.630 [2024-11-20 10:05:21.147829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.630 [2024-11-20 10:05:21.147849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.147860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.156130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.167184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.167210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.167219] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.175345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.175366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.175374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.185430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.185450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.185458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.194128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.194148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.194156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.631 [2024-11-20 10:05:21.204343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.631 [2024-11-20 10:05:21.204366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18517 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:47.631 [2024-11-20 10:05:21.204374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.213206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.213228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.230214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.230237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.230246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.240021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.240046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.240053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.248326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.248348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:22708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.248355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.258310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.258332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.258341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.266936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.266956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.266964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.276486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.276506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.276514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.285280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.285300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.285308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.890 [2024-11-20 10:05:21.294353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.890 [2024-11-20 10:05:21.294374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.890 [2024-11-20 10:05:21.294382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.303903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.303923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.303931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.313665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.313685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.313693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.323381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 
00:26:47.891 [2024-11-20 10:05:21.323402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.323409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.333222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.333242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.333250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.342230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.342251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.342259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.351807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.351828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.351835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.361557] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.361578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.361586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.371257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.371278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.371286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.381797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.381819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.381827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.390666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.390686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.390694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.401626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.401647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.401659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.410639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.410659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.410667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.421793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.421814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.421822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.431545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.431567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.431575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.442686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.442707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.442715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.451436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.451466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.891 [2024-11-20 10:05:21.462290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:47.891 [2024-11-20 10:05:21.462313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.891 [2024-11-20 10:05:21.462322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.150 [2024-11-20 10:05:21.471442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.150 [2024-11-20 10:05:21.471462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.150 [2024-11-20 10:05:21.471470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.150 [2024-11-20 10:05:21.479868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.150 [2024-11-20 10:05:21.479889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.150 [2024-11-20 10:05:21.479898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.150 [2024-11-20 10:05:21.490592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.150 [2024-11-20 10:05:21.490618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.150 [2024-11-20 10:05:21.490626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.150 [2024-11-20 10:05:21.500128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.150 [2024-11-20 10:05:21.500150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.150 [2024-11-20 10:05:21.500158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.508871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.508892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20339 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.508900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.520305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.520326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.520335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.528764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.528784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.528793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 25438.00 IOPS, 99.37 MiB/s [2024-11-20T09:05:21.733Z] [2024-11-20 10:05:21.541832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.541852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.541859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.553627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.553647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.553655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.564447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.564468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.564477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.573200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.573228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.573246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.583018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.583039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.583047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.592059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 
00:26:48.151 [2024-11-20 10:05:21.592080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.592089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.601566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.601587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.601595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.611857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.611877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.611885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.620795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.620816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.620824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.629707] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.629728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.629736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.639400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.639420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.639428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.648664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.648685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.648693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.658136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.658157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.658169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.669370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.669390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.669399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.681434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.681465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.681473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.690307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.690327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.690335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.702128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.702148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.702156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.713953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.713973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.713980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.151 [2024-11-20 10:05:21.722654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.151 [2024-11-20 10:05:21.722673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.151 [2024-11-20 10:05:21.722681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.734536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.734557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.734565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.746734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.746756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.746764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.755336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.755358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.755366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.767135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.767156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.767164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.775497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.775517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.775524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.788268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.788288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.411 [2024-11-20 10:05:21.788297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.800783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.800803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.811033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.811052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.811060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.820304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.411 [2024-11-20 10:05:21.820324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.411 [2024-11-20 10:05:21.820332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.411 [2024-11-20 10:05:21.829413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.829433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:8556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.829441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.838912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.838932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.838944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.849841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.849861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.849869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.858629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.858648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.858656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.867685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.867705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.867712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.877558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.877578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.877586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.890213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.890233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.890241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.900052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.900072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.900080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.907996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 
00:26:48.412 [2024-11-20 10:05:21.908015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.908023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.918800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.918821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.918828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.930095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.930118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.930127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.939562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.939581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.939589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.948676] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.948696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.948704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.957808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.957828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.957836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.967424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.967444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.967452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.412 [2024-11-20 10:05:21.978842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.412 [2024-11-20 10:05:21.978862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.412 [2024-11-20 10:05:21.978870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:48.672 [2024-11-20 10:05:21.989969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.672 [2024-11-20 10:05:21.989989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.672 [2024-11-20 10:05:21.989997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.672 [2024-11-20 10:05:21.999255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.672 [2024-11-20 10:05:21.999276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.672 [2024-11-20 10:05:21.999284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.672 [2024-11-20 10:05:22.011590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.672 [2024-11-20 10:05:22.011611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.672 [2024-11-20 10:05:22.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.672 [2024-11-20 10:05:22.023512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.023532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.023540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.036824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.036844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.036852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.046380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.046400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.046407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.054781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.054801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.054809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.066070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.066098] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.075508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.075528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.075537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.084736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.084756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.084764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.093994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.094014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.104912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.104933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.104945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.113197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.113223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.113231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.124897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.124918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.124926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.137414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.137443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.149774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.149795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:16569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.149803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.158142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.158163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.158172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.168416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.168436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.168444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.181254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.181274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.181283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.189346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.189366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.189375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.201071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.201094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.201102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.212343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.212363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.212372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.221432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.221452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.221460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.233042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.233063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.233071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.673 [2024-11-20 10:05:22.244901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.673 [2024-11-20 10:05:22.244923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.673 [2024-11-20 10:05:22.244931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.253038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.253059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.253067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.266212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.266233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.266240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.274553] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.274573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.274581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.284958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.284979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.284987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.296656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.296676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.296684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.309299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.309320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.309328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.320991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.321010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.321018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.329617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.329637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.329645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.342141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.342161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.342169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.354528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.354549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.354556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.365702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.365721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.365729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.374313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.374332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.374340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.385151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.385171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.385182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.396432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.396463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.396472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.407829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.407848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.407857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.416322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.416342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.416350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.428992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.429011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.933 [2024-11-20 10:05:22.429019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.441257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.441276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.933 [2024-11-20 10:05:22.441284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.933 [2024-11-20 10:05:22.453540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.933 [2024-11-20 10:05:22.453561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.453569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.934 [2024-11-20 10:05:22.465219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.934 [2024-11-20 10:05:22.465239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.465247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.934 [2024-11-20 10:05:22.473613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.934 [2024-11-20 10:05:22.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.473641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.934 [2024-11-20 10:05:22.486080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.934 [2024-11-20 10:05:22.486101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.486110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.934 [2024-11-20 10:05:22.498671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.934 [2024-11-20 10:05:22.498692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.498700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.934 [2024-11-20 10:05:22.509800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:48.934 [2024-11-20 10:05:22.509822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.934 [2024-11-20 10:05:22.509830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.192 [2024-11-20 10:05:22.519322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:49.192 [2024-11-20 10:05:22.519341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.192 [2024-11-20 10:05:22.519350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.192 [2024-11-20 10:05:22.529354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:49.192 [2024-11-20 10:05:22.529374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.192 [2024-11-20 10:05:22.529382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.192 [2024-11-20 10:05:22.538808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x972370) 00:26:49.192 [2024-11-20 10:05:22.538827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.192 [2024-11-20 10:05:22.538836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.192 24878.00 IOPS, 97.18 MiB/s 00:26:49.192 Latency(us) 00:26:49.192 [2024-11-20T09:05:22.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.192 nvme0n1 : 2.00 24898.85 97.26 0.00 0.00 5136.08 2715.06 17601.10 00:26:49.192 [2024-11-20T09:05:22.774Z] =================================================================================================================== 00:26:49.192 [2024-11-20T09:05:22.774Z] Total : 24898.85 97.26 0.00 0.00 5136.08 2715.06 17601.10 00:26:49.192 { 00:26:49.192 "results": [ 00:26:49.192 { 00:26:49.192 "job": "nvme0n1", 00:26:49.192 "core_mask": "0x2", 00:26:49.192 "workload": "randread", 00:26:49.192 "status": "finished", 00:26:49.192 "queue_depth": 128, 00:26:49.192 "io_size": 4096, 00:26:49.192 "runtime": 2.003466, 00:26:49.192 "iops": 24898.850292443196, 00:26:49.192 "mibps": 97.26113395485623, 00:26:49.192 "io_failed": 0, 00:26:49.192 "io_timeout": 0, 00:26:49.192 "avg_latency_us": 5136.080832407375, 00:26:49.192 "min_latency_us": 2715.062857142857, 00:26:49.192 "max_latency_us": 17601.097142857143 
00:26:49.192 } 00:26:49.192 ], 00:26:49.192 "core_count": 1 00:26:49.192 } 00:26:49.192 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:49.192 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:49.192 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:49.192 | .driver_specific 00:26:49.192 | .nvme_error 00:26:49.192 | .status_code 00:26:49.192 | .command_transient_transport_error' 00:26:49.192 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 )) 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2802817 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2802817 ']' 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2802817 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802817 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.452 10:05:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802817' 00:26:49.452 killing process with pid 2802817 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2802817 00:26:49.452 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.452 00:26:49.452 Latency(us) 00:26:49.452 [2024-11-20T09:05:23.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.452 [2024-11-20T09:05:23.034Z] =================================================================================================================== 00:26:49.452 [2024-11-20T09:05:23.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2802817 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2803454 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2803454 /var/tmp/bperf.sock 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:49.452 10:05:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2803454 ']' 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.452 10:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.712 [2024-11-20 10:05:23.037167] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:49.712 [2024-11-20 10:05:23.037218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803454 ] 00:26:49.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.712 Zero copy mechanism will not be used. 
00:26:49.712 [2024-11-20 10:05:23.111217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.712 [2024-11-20 10:05:23.153389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.712 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.712 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:49.712 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:49.712 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.970 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.538 nvme0n1 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:50.538 10:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.538 Zero copy mechanism will not be used. 00:26:50.538 Running I/O for 2 seconds... 00:26:50.538 [2024-11-20 10:05:24.002697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.002732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.002743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.007929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.007954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.007967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.538 
[2024-11-20 10:05:24.013110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.013133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.013142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.018295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.018318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.018326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.023538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.023559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.023567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.028702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.028724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.028732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.033905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.033927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.033935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.036713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.036734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.538 [2024-11-20 10:05:24.036742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.538 [2024-11-20 10:05:24.041900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.538 [2024-11-20 10:05:24.041921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.041929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.047125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.047146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.047155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.052353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.052378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.052387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.057566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.057588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.057596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.062712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.062733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.062741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.068028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.068048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.539 [2024-11-20 10:05:24.068057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.073272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.073293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.073301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.078530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.078552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.078559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.083868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.083891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.083899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.089092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.089115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.089124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.094449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.094471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.094479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.099593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.099615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.104793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.104815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.104823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.109964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.109987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.109995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.539 [2024-11-20 10:05:24.115197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.539 [2024-11-20 10:05:24.115227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.539 [2024-11-20 10:05:24.115235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.120406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.120429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.125584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.125608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.125617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.130754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.130777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.130785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.135937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.135959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.135966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.141136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.141158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.141170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.146338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.146360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.146369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.151556] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.151577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.151584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.156724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.156746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.156754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.161935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.161956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.161964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.167080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.167100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.800 [2024-11-20 10:05:24.167108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:50.800 [2024-11-20 10:05:24.172269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.800 [2024-11-20 10:05:24.172290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.801 [2024-11-20 10:05:24.172298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.801 [2024-11-20 10:05:24.177604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.801 [2024-11-20 10:05:24.177625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.801 [2024-11-20 10:05:24.177633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.801 [2024-11-20 10:05:24.182771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.801 [2024-11-20 10:05:24.182793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.801 [2024-11-20 10:05:24.182800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.801 [2024-11-20 10:05:24.187930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:50.801 [2024-11-20 10:05:24.187952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.801 [2024-11-20 10:05:24.187959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:50.801 [2024-11-20 10:05:24.193167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:50.801 [2024-11-20 10:05:24.193187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.801 [2024-11-20 10:05:24.193195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:50.801 [2024-11-20 10:05:24.197927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:50.801 [2024-11-20 10:05:24.197948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.801 [2024-11-20 10:05:24.197956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:50.801 [2024-11-20 10:05:24.203040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:50.801 [2024-11-20 10:05:24.203061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.801 [2024-11-20 10:05:24.203069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... dozens of similar records omitted: the same data digest error on tqpair=(0x1bc9580) followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion repeats for READ commands on qid:1 (cids 0-14, len:32, various LBAs) between 10:05:24.208096 and 10:05:24.590546 ...]
00:26:51.064 [2024-11-20 10:05:24.590517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:51.064 [2024-11-20 10:05:24.590538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.064 [2024-11-20 10:05:24.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:51.064 [2024-11-20 10:05:24.595711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:51.064 [2024-11-20 10:05:24.595733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.064 [2024-11-20 10:05:24.595741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.064 [2024-11-20 10:05:24.600915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.064 [2024-11-20 10:05:24.600936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.064 [2024-11-20 10:05:24.600945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.064 [2024-11-20 10:05:24.606136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.606157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.606165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.611369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.611391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.611403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.616592] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.616612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.616619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.621817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.621838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.621846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.627007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.627029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.627036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.632158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.632179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.632187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:51.065 [2024-11-20 10:05:24.637343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.065 [2024-11-20 10:05:24.637365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.065 [2024-11-20 10:05:24.637373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.642563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.642584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.642592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.647785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.647806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.647814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.653168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.653190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.653198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.657915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.657937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.657945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.661408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.661428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.661435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.666701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.666722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.666729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.671843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.671863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.671872] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.677051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.677071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.677078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.682282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.682302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.325 [2024-11-20 10:05:24.682310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.325 [2024-11-20 10:05:24.687482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.325 [2024-11-20 10:05:24.687503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.687510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.692673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.692693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.692701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.697872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.697892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.697903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.703008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.703028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.703036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.708176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.708195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.708209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.713371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.713391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.713398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.718516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.718535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.723687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.723707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.723715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.728867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.728887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.728895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.733995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.734014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.734022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.739208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.739228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.739235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.744514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.744538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.744546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.749712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.749731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.749739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.754897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 
00:26:51.326 [2024-11-20 10:05:24.754918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.754926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.760074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.760093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.760102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.765348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.765369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.765378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.770599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.770619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.770627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.775845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.775872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.781084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.781104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.781112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.786312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.786333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.786341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.791487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.791507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.791515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.796742] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.796761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.796769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.801977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.801996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.802003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.807874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.807902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.813599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.813620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.813629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.820129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.326 [2024-11-20 10:05:24.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.326 [2024-11-20 10:05:24.820158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.326 [2024-11-20 10:05:24.827475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.827497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.827506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.834080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.834102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.834110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.841660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.841682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.841694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.848712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.848734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.848743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.856102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.856122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.856131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.861046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.861066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.861075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.866208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.866229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 
10:05:24.866237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.871433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.871453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.871462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.876480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.876501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.876509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.881632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.881653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.327 [2024-11-20 10:05:24.881661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.327 [2024-11-20 10:05:24.886775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.327 [2024-11-20 10:05:24.886795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.327 [2024-11-20 10:05:24.886803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:51.327 [2024-11-20 10:05:24.891990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:51.327 [2024-11-20 10:05:24.892016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.327 [2024-11-20 10:05:24.892024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:51.327 [2024-11-20 10:05:24.897182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:51.327 [2024-11-20 10:05:24.897208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.327 [2024-11-20 10:05:24.897217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... repeated nvme_tcp.c:1365 "data digest error on tqpair=(0x1bc9580)" / READ (len:32, varying lba, sqid:1 cid:15) / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entry triplets from 10:05:24.902687 through 10:05:24.996434 elided ...]
00:26:51.587 5720.00 IOPS, 715.00 MiB/s [2024-11-20T09:05:25.169Z]
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR pattern continues from 10:05:25.002886 through 10:05:25.317766, now across cids 6, 10, 13, and 15 on qid:1 ...]
00:26:51.847 [2024-11-20 10:05:25.323450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:51.847 [2024-11-20 10:05:25.323473] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.323482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.329442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.329464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.335133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.335154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.335162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.340557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.340577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.340585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.345972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.345992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.346000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.351191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.351216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.351225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.356280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.356305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.361401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.361421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.361429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.366664] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.366686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.366694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.371590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.371612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.371621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.376421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.376443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.376452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.381659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.381681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.381689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.386854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.386875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.386883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.392257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.392277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.392286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.397582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.397604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.397617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.402714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.402735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.402744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.407871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.407891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.407899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.413121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.413141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.413148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.418278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.418299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.418307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.847 [2024-11-20 10:05:25.423757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:51.847 [2024-11-20 10:05:25.423779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.847 [2024-11-20 10:05:25.423788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.429326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.429348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.429356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.435052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.435074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.435082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.440429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.440451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.440459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.445804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.445831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.445839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.451211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.451232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.451240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.456534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.456555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.456563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.461759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.461781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.461789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.467071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.467092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.467099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.472135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.472156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.472165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.478244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.478265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.478273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.483979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.484000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.484008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.489406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 
10:05:25.489426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.489434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.494932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.494954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.494962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.500379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.500400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.500407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.505811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.505832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.505840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.511244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.511265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.511273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.516687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.516710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.516718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.522219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.522242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.522250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.527759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.527781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.527789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.533261] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.533282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.538801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.538823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.538835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.544263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.544286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.544294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.549923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.549945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.549953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.555432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.555455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.555463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.560803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.560825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.566096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.566118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.566126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.571416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.571438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.571446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.576803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.576825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.576833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.582222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.582245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.582253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.587522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.587550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.587558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.592884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.592906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.592915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.598465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.598486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.598494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.603872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.603893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.603902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.609200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.609228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.609236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.614021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.614042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.614050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.619267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.619289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.619297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.624391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.624412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.624420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.629503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.629525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.629533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.634684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.634706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.634714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.639819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.639841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.639849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.645007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.645028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.645036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.650329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.650351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.650359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.655873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.655895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.655903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.660574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.660595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.663666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.663686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.663694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.669082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.669103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.669111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.674559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.674581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.674593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.107 [2024-11-20 10:05:25.680098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.107 [2024-11-20 10:05:25.680120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.107 [2024-11-20 10:05:25.680127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.685286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.685308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.685317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.690451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.690472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.690480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.695774] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.695804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.700681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.700702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.700710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.706147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.706169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.706178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.711076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.711097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.716303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.716325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.716333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.721211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.721251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.721260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.726433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.726465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.731852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.731874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.731882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.737222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.737243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.737251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.743431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.743453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.743461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.750229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.750252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.750260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.757721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.757744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.757752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.764080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.764104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.764113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.770464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.770488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.770500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.776792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.776815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.776824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.782852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.782874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.782883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.788847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.788869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.788877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.791977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.791998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.792007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.797656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.797678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.797686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.803607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.803637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.809115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.809137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.369 [2024-11-20 10:05:25.809144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.369 [2024-11-20 10:05:25.814472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.369 [2024-11-20 10:05:25.814495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.814503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.819770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.819796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.819804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.825429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.825451] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.825459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.831139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.831161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.831170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.836762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.836783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.836791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.842363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.842385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.842393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.847533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.847557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.847565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.852979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.853001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.853008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.858330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.858351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.858360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.863634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.863656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.863664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.869111] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.869133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.869141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.874569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.874592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.874599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.880139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.880160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.880168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.885961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.885982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.885990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.891730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.891751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.891758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.897321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.897342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.897351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.902971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.902993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.903000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.908377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.908399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.908408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.913768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.913790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.913801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.919153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.919174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.924567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.924590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.924598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.931135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.931158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.931166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.937080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.937102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.937109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.370 [2024-11-20 10:05:25.944779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.370 [2024-11-20 10:05:25.944802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.370 [2024-11-20 10:05:25.944811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.952454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.629 [2024-11-20 10:05:25.952477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.629 [2024-11-20 10:05:25.952486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.960820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.629 [2024-11-20 10:05:25.960843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:52.629 [2024-11-20 10:05:25.960851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.968229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.629 [2024-11-20 10:05:25.968253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.629 [2024-11-20 10:05:25.968261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.976786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.629 [2024-11-20 10:05:25.976813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.629 [2024-11-20 10:05:25.976821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.984815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.629 [2024-11-20 10:05:25.984838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.629 [2024-11-20 10:05:25.984846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:52.629 [2024-11-20 10:05:25.992871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580) 00:26:52.630 [2024-11-20 10:05:25.992893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.630 [2024-11-20 10:05:25.992902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:52.630 [2024-11-20 10:05:26.000729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc9580)
00:26:52.630 [2024-11-20 10:05:26.000752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.630 [2024-11-20 10:05:26.000760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:52.630 5676.00 IOPS, 709.50 MiB/s
00:26:52.630 Latency(us)
00:26:52.630 [2024-11-20T09:05:26.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:52.630 nvme0n1 : 2.00 5678.68 709.84 0.00 0.00 2814.72 643.66 8925.38
00:26:52.630 [2024-11-20T09:05:26.212Z] ===================================================================================================================
00:26:52.630 [2024-11-20T09:05:26.212Z] Total : 5678.68 709.84 0.00 0.00 2814.72 643.66 8925.38
00:26:52.630 {
00:26:52.630   "results": [
00:26:52.630     {
00:26:52.630       "job": "nvme0n1",
00:26:52.630       "core_mask": "0x2",
00:26:52.630       "workload": "randread",
00:26:52.630       "status": "finished",
00:26:52.630       "queue_depth": 16,
00:26:52.630       "io_size": 131072,
00:26:52.630       "runtime": 2.001872,
00:26:52.630       "iops": 5678.684751072996,
00:26:52.630       "mibps": 709.8355938841245,
00:26:52.630       "io_failed": 0,
00:26:52.630       "io_timeout": 0,
00:26:52.630       "avg_latency_us": 2814.7178124057505,
00:26:52.630       "min_latency_us": 643.6571428571428,
00:26:52.630       "max_latency_us": 8925.379047619048
00:26:52.630     }
00:26:52.630   ],
00:26:52.630   "core_count": 1
00:26:52.630 }
00:26:52.630 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:52.630 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:52.630 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:52.630 | .driver_specific
00:26:52.630 | .nvme_error
00:26:52.630 | .status_code
00:26:52.630 | .command_transient_transport_error'
00:26:52.630 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 ))
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2803454
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2803454 ']'
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2803454
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2803454
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing
process with pid 2803454' 00:26:52.889 killing process with pid 2803454 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2803454 00:26:52.889 Received shutdown signal, test time was about 2.000000 seconds 00:26:52.889 00:26:52.889 Latency(us) 00:26:52.889 [2024-11-20T09:05:26.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.889 [2024-11-20T09:05:26.471Z] =================================================================================================================== 00:26:52.889 [2024-11-20T09:05:26.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2803454 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2803983 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2803983 /var/tmp/bperf.sock 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2803983 ']' 00:26:52.889 10:05:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:52.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.889 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.149 [2024-11-20 10:05:26.470368] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:53.149 [2024-11-20 10:05:26.470413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803983 ] 00:26:53.149 [2024-11-20 10:05:26.546150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.149 [2024-11-20 10:05:26.587975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.149 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.149 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:53.149 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:53.149 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.408 10:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.977 nvme0n1 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:53.977 10:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.977 
Running I/O for 2 seconds... 00:26:53.977 [2024-11-20 10:05:27.427632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e12d8 00:26:53.977 [2024-11-20 10:05:27.428609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.428637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.437874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e38d0 00:26:53.977 [2024-11-20 10:05:27.439124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.439146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.445730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fda78 00:26:53.977 [2024-11-20 10:05:27.446964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.446984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.453440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ed920 00:26:53.977 [2024-11-20 10:05:27.454094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.454112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.462816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e88f8 00:26:53.977 [2024-11-20 10:05:27.463591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.463610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.472231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee5c8 00:26:53.977 [2024-11-20 10:05:27.473148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.473166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.481791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ef270 00:26:53.977 [2024-11-20 10:05:27.482787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.482805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.491186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb480 00:26:53.977 [2024-11-20 10:05:27.492312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.492331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.500579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e88f8 00:26:53.977 [2024-11-20 10:05:27.501882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.501900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.508928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e0a68 00:26:53.977 [2024-11-20 10:05:27.509867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.509886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.517787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fda78 00:26:53.977 [2024-11-20 10:05:27.518742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.518761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.526781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc998 00:26:53.977 [2024-11-20 10:05:27.527724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 
[2024-11-20 10:05:27.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.535758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eb328 00:26:53.977 [2024-11-20 10:05:27.536709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.536726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.544638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e6b70 00:26:53.977 [2024-11-20 10:05:27.545461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.545480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.977 [2024-11-20 10:05:27.553735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7100 00:26:53.977 [2024-11-20 10:05:27.554584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.977 [2024-11-20 10:05:27.554602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.562578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f3e60 00:26:54.237 [2024-11-20 10:05:27.563524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19848 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.563544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.571754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.572606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.572625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.581226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.582140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.582159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.590161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.591106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.591125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.599121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.600101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:14854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.600120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.608097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.609038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.609058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.617026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.617989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.618011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.625945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.626873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.626891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.634847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.635790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.635808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.643772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.644721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.644739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.652932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.653916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.653935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.237 [2024-11-20 10:05:27.661845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:54.237 [2024-11-20 10:05:27.662768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.237 [2024-11-20 10:05:27.662786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.671001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7970 00:26:54.238 
[2024-11-20 10:05:27.671714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.671732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.679315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2948 00:26:54.238 [2024-11-20 10:05:27.680743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.680761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.688057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ef270 00:26:54.238 [2024-11-20 10:05:27.688762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.688781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.697114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eb328 00:26:54.238 [2024-11-20 10:05:27.697846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.697864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.706179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5640) with pdu=0x2000166e6b70 00:26:54.238 [2024-11-20 10:05:27.706906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.706924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.715136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e7818 00:26:54.238 [2024-11-20 10:05:27.715832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.715850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.724050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e88f8 00:26:54.238 [2024-11-20 10:05:27.724741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.724760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.733027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e99d8 00:26:54.238 [2024-11-20 10:05:27.733753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.733772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.741997] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166df118 00:26:54.238 [2024-11-20 10:05:27.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.742742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.751153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ed0b0 00:26:54.238 [2024-11-20 10:05:27.751865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.751883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.760132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2510 00:26:54.238 [2024-11-20 10:05:27.760821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.760839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.769111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1430 00:26:54.238 [2024-11-20 10:05:27.769830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.769849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 
dnr:0 00:26:54.238 [2024-11-20 10:05:27.778090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f0350 00:26:54.238 [2024-11-20 10:05:27.778816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.778834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.787046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fe2e8 00:26:54.238 [2024-11-20 10:05:27.787744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.787761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.795966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fd208 00:26:54.238 [2024-11-20 10:05:27.796682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.796700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.804986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc128 00:26:54.238 [2024-11-20 10:05:27.805687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.805705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.238 [2024-11-20 10:05:27.814417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fbcf0 00:26:54.238 [2024-11-20 10:05:27.815215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.238 [2024-11-20 10:05:27.815234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.823611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec840 00:26:54.498 [2024-11-20 10:05:27.824477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.824495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.832576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166efae0 00:26:54.498 [2024-11-20 10:05:27.833401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.833420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.841615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eea00 00:26:54.498 [2024-11-20 10:05:27.842501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.842519] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.850574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e6300 00:26:54.498 [2024-11-20 10:05:27.851414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.851438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.859598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebb98 00:26:54.498 [2024-11-20 10:05:27.860412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.860430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.868528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e8088 00:26:54.498 [2024-11-20 10:05:27.869335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.869354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.877469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e1f80 00:26:54.498 [2024-11-20 10:05:27.878312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.878331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.886430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3060 00:26:54.498 [2024-11-20 10:05:27.887238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.887256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.895354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3d08 00:26:54.498 [2024-11-20 10:05:27.896155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.896173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.904360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166dece0 00:26:54.498 [2024-11-20 10:05:27.905180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.905198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.913334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8e88 00:26:54.498 [2024-11-20 10:05:27.914174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.498 [2024-11-20 10:05:27.914192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.922305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ea680 00:26:54.498 [2024-11-20 10:05:27.923139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.923156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.931225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f46d0 00:26:54.498 [2024-11-20 10:05:27.932072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.932090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.940524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f57b0 00:26:54.498 [2024-11-20 10:05:27.941356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.941375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.949509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f9b30 00:26:54.498 [2024-11-20 10:05:27.950309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17647 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.950328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.958479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fac10 00:26:54.498 [2024-11-20 10:05:27.959320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.959338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.967506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f3a28 00:26:54.498 [2024-11-20 10:05:27.968354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.968372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.976459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec408 00:26:54.498 [2024-11-20 10:05:27.977346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.977364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.985460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eff18 00:26:54.498 [2024-11-20 10:05:27.986276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.986294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:27.994374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eb760 00:26:54.498 [2024-11-20 10:05:27.995207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:27.995240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:28.003805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ea680 00:26:54.498 [2024-11-20 10:05:28.004898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.498 [2024-11-20 10:05:28.004916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.498 [2024-11-20 10:05:28.014878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e4de8 00:26:54.499 [2024-11-20 10:05:28.016447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.016465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.021218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eee38 00:26:54.499 [2024-11-20 10:05:28.021903] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.021921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.030329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166edd58 00:26:54.499 [2024-11-20 10:05:28.031051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.031069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.038705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc998 00:26:54.499 [2024-11-20 10:05:28.039439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.039458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.048092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eaab8 00:26:54.499 [2024-11-20 10:05:28.048957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.048975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.057434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166feb58 
00:26:54.499 [2024-11-20 10:05:28.058369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.058387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.499 [2024-11-20 10:05:28.066792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fa7d8 00:26:54.499 [2024-11-20 10:05:28.067887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.499 [2024-11-20 10:05:28.067905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.076095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ddc00 00:26:54.758 [2024-11-20 10:05:28.077176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.077194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.085493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e49b0 00:26:54.758 [2024-11-20 10:05:28.086597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.086619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.093869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5640) with pdu=0x2000166e5658 00:26:54.758 [2024-11-20 10:05:28.094849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.094867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.102197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f35f0 00:26:54.758 [2024-11-20 10:05:28.103072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.103090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.111458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1ca0 00:26:54.758 [2024-11-20 10:05:28.112269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.112287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.119805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1868 00:26:54.758 [2024-11-20 10:05:28.120424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.758 [2024-11-20 10:05:28.120442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.758 [2024-11-20 10:05:28.128407] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8a50 00:26:54.759 [2024-11-20 10:05:28.129105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.129123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.137518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fda78 00:26:54.759 [2024-11-20 10:05:28.138265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.138284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.147750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e6738 00:26:54.759 [2024-11-20 10:05:28.148820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.148838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.156732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f0bc0 00:26:54.759 [2024-11-20 10:05:28.157826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.157844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:26:54.759 [2024-11-20 10:05:28.166118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7970 00:26:54.759 [2024-11-20 10:05:28.167321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.167342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.175168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8e88 00:26:54.759 [2024-11-20 10:05:28.176273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.176291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.183699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f20d8 00:26:54.759 [2024-11-20 10:05:28.184819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.184836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.192715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebfd0 00:26:54.759 [2024-11-20 10:05:28.193467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.193486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.201014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6890 00:26:54.759 [2024-11-20 10:05:28.201867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.201885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.210134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee190 00:26:54.759 [2024-11-20 10:05:28.210974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.210992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.220668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e0ea0 00:26:54.759 [2024-11-20 10:05:28.221884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.221902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.229175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2d80 00:26:54.759 [2024-11-20 10:05:28.230375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.230393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.238545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f4298 00:26:54.759 [2024-11-20 10:05:28.239816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.239833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.247843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1430 00:26:54.759 [2024-11-20 10:05:28.249263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.249280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.257026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc998 00:26:54.759 [2024-11-20 10:05:28.258453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.258471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.264515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb048 00:26:54.759 [2024-11-20 10:05:28.265136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.265153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.273798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e73e0 00:26:54.759 [2024-11-20 10:05:28.274549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.274567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.282815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f92c0 00:26:54.759 [2024-11-20 10:05:28.283758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.283776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.291154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8e88 00:26:54.759 [2024-11-20 10:05:28.292092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.292109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.299647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ff3c8 00:26:54.759 [2024-11-20 10:05:28.300468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 
[2024-11-20 10:05:28.300486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.308704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ff3c8 00:26:54.759 [2024-11-20 10:05:28.309456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.309475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.317649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ff3c8 00:26:54.759 [2024-11-20 10:05:28.318407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.318426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.326413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166dece0 00:26:54.759 [2024-11-20 10:05:28.327329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.759 [2024-11-20 10:05:28.327347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:54.759 [2024-11-20 10:05:28.335593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb8b8 00:26:55.019 [2024-11-20 10:05:28.336539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16581 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.336558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.346472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3060 00:26:55.019 [2024-11-20 10:05:28.347870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.347888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.352916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f3e60 00:26:55.019 [2024-11-20 10:05:28.353648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.353666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.363787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e99d8 00:26:55.019 [2024-11-20 10:05:28.364895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.364913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.374006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f92c0 00:26:55.019 [2024-11-20 10:05:28.375563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.375580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.380361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f46d0 00:26:55.019 [2024-11-20 10:05:28.380967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.380985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.389760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc998 00:26:55.019 [2024-11-20 10:05:28.390618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.390637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.019 [2024-11-20 10:05:28.398259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e0a68 00:26:55.019 [2024-11-20 10:05:28.399100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.019 [2024-11-20 10:05:28.399121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.407330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebfd0 00:26:55.020 [2024-11-20 10:05:28.408209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.408227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:55.020 28104.00 IOPS, 109.78 MiB/s [2024-11-20T09:05:28.602Z] [2024-11-20 10:05:28.417526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166efae0 00:26:55.020 [2024-11-20 10:05:28.418733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.418752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.425926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166feb58 00:26:55.020 [2024-11-20 10:05:28.426956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.426976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.434539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6458 00:26:55.020 [2024-11-20 10:05:28.435351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.435370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.444952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5640) with pdu=0x2000166f6458 00:26:55.020 [2024-11-20 10:05:28.446067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.446086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.454101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1868 00:26:55.020 [2024-11-20 10:05:28.455214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.455233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.461585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166df118 00:26:55.020 [2024-11-20 10:05:28.462332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.462351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.472481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166df118 00:26:55.020 [2024-11-20 10:05:28.473687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.473705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.481685] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee190 00:26:55.020 [2024-11-20 10:05:28.482901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.482920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.489144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166de470 00:26:55.020 [2024-11-20 10:05:28.490113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.490130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.500109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166feb58 00:26:55.020 [2024-11-20 10:05:28.501565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.501583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.506619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8e88 00:26:55.020 [2024-11-20 10:05:28.507341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.507359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:26:55.020 [2024-11-20 10:05:28.517345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5ec8 00:26:55.020 [2024-11-20 10:05:28.518554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.518572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.525906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb480 00:26:55.020 [2024-11-20 10:05:28.526802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.526821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.534460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5658 00:26:55.020 [2024-11-20 10:05:28.535065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.535083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.543424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5658 00:26:55.020 [2024-11-20 10:05:28.544038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.553498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5658 00:26:55.020 [2024-11-20 10:05:28.554713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.554731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.562588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6890 00:26:55.020 [2024-11-20 10:05:28.563332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.563351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.571533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb048 00:26:55.020 [2024-11-20 10:05:28.572535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.572552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.581754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2510 00:26:55.020 [2024-11-20 10:05:28.583160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.583178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.020 [2024-11-20 10:05:28.588098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb048 00:26:55.020 [2024-11-20 10:05:28.588914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.020 [2024-11-20 10:05:28.588933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.597699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee5c8 00:26:55.281 [2024-11-20 10:05:28.598646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.598666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.607314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6458 00:26:55.281 [2024-11-20 10:05:28.608318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.608337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.616107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e1b48 00:26:55.281 [2024-11-20 10:05:28.616838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.616857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.624280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e49b0 00:26:55.281 [2024-11-20 10:05:28.624989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.625007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.633144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6890 00:26:55.281 [2024-11-20 10:05:28.633861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.633882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.642524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e01f8 00:26:55.281 [2024-11-20 10:05:28.643391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.643411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.653918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ea248 00:26:55.281 [2024-11-20 10:05:28.655480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 
[2024-11-20 10:05:28.655500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.660502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8e88 00:26:55.281 [2024-11-20 10:05:28.661244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.661264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.672456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:55.281 [2024-11-20 10:05:28.674028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.674046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.679063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebfd0 00:26:55.281 [2024-11-20 10:05:28.679901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.679919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.688434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fac10 00:26:55.281 [2024-11-20 10:05:28.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14512 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.689423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.697797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee190 00:26:55.281 [2024-11-20 10:05:28.698315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.698334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.707253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f57b0 00:26:55.281 [2024-11-20 10:05:28.708001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.708020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.715708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f1430 00:26:55.281 [2024-11-20 10:05:28.716515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.716534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.724955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e0a68 00:26:55.281 [2024-11-20 10:05:28.725788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:21685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.725807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.735554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec840 00:26:55.281 [2024-11-20 10:05:28.736625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.736644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.742915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ecc78 00:26:55.281 [2024-11-20 10:05:28.743558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.743577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.751975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc998 00:26:55.281 [2024-11-20 10:05:28.752472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.752491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.761360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f57b0 00:26:55.281 [2024-11-20 10:05:28.761968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.761986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.770070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f31b8 00:26:55.281 [2024-11-20 10:05:28.770956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.770974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.778934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7100 00:26:55.281 [2024-11-20 10:05:28.779696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.779715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.788509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e8088 00:26:55.281 [2024-11-20 10:05:28.789523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.789541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:55.281 [2024-11-20 10:05:28.798088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f6458 00:26:55.281 
[2024-11-20 10:05:28.799087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.281 [2024-11-20 10:05:28.799106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.807086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec408 00:26:55.282 [2024-11-20 10:05:28.808084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.817218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec408 00:26:55.282 [2024-11-20 10:05:28.818680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.818698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.826129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166df550 00:26:55.282 [2024-11-20 10:05:28.827662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.827681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.832650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5640) with pdu=0x2000166e9e10 00:26:55.282 [2024-11-20 10:05:28.833490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.833508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.843665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f31b8 00:26:55.282 [2024-11-20 10:05:28.844879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:55.282 [2024-11-20 10:05:28.851045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e01f8 00:26:55.282 [2024-11-20 10:05:28.851660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.282 [2024-11-20 10:05:28.851679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.860508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2d80 00:26:55.542 [2024-11-20 10:05:28.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.861379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.870816] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebfd0 00:26:55.542 [2024-11-20 10:05:28.872063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.872084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.877979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166eb760 00:26:55.542 [2024-11-20 10:05:28.878774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.878793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.887269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e8d30 00:26:55.542 [2024-11-20 10:05:28.888198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.888219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.896985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8a50 00:26:55.542 [2024-11-20 10:05:28.897676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.897695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:55.542 [2024-11-20 10:05:28.906044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5658 00:26:55.542 [2024-11-20 10:05:28.907008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.907026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.915919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f96f8 00:26:55.542 [2024-11-20 10:05:28.917271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.917289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.922020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fa3a0 00:26:55.542 [2024-11-20 10:05:28.922703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.922721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.932877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e84c0 00:26:55.542 [2024-11-20 10:05:28.933804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.933822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.942252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e7818 00:26:55.542 [2024-11-20 10:05:28.943416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.943434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.952147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e6738 00:26:55.542 [2024-11-20 10:05:28.953579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.953596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.961611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e8088 00:26:55.542 [2024-11-20 10:05:28.963133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.963152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.967947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ff3c8 00:26:55.542 [2024-11-20 10:05:28.968611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.542 [2024-11-20 10:05:28.968630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.542 [2024-11-20 10:05:28.976978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e27f0 00:26:55.542 [2024-11-20 10:05:28.977590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:28.977609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:28.986227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e99d8 00:26:55.543 [2024-11-20 10:05:28.987052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:28.987071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:28.996447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e4140 00:26:55.543 [2024-11-20 10:05:28.997686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:28.997704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.005761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3060 00:26:55.543 [2024-11-20 10:05:29.007174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.007191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.012276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f57b0 00:26:55.543 [2024-11-20 10:05:29.012974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.012992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.023087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f31b8 00:26:55.543 [2024-11-20 10:05:29.024173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.024192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.032490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166feb58 00:26:55.543 [2024-11-20 10:05:29.033696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.033715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.040984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e5220 00:26:55.543 [2024-11-20 10:05:29.042177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:55.543 [2024-11-20 10:05:29.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.050037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f2948 00:26:55.543 [2024-11-20 10:05:29.050744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.050763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.058449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fe2e8 00:26:55.543 [2024-11-20 10:05:29.059067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.059085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.069148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:55.543 [2024-11-20 10:05:29.070662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.070679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.075441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3060 00:26:55.543 [2024-11-20 10:05:29.076024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2942 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.076042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.085054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f92c0 00:26:55.543 [2024-11-20 10:05:29.086000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.086018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.094101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166df988 00:26:55.543 [2024-11-20 10:05:29.094609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.094627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.103439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fa3a0 00:26:55.543 [2024-11-20 10:05:29.104025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.104046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:55.543 [2024-11-20 10:05:29.112758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fac10 00:26:55.543 [2024-11-20 10:05:29.113785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:10107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.543 [2024-11-20 10:05:29.113803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.121492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc128 00:26:55.813 [2024-11-20 10:05:29.122506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.122523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.129959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e3d08 00:26:55.813 [2024-11-20 10:05:29.130635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.130653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.138833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e84c0 00:26:55.813 [2024-11-20 10:05:29.139474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.139492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.147741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7da8 00:26:55.813 [2024-11-20 10:05:29.148380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.148399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.156682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e49b0 00:26:55.813 [2024-11-20 10:05:29.157398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.157416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.165939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e1b48 00:26:55.813 [2024-11-20 10:05:29.166410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.166429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.175316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f4f40 00:26:55.813 [2024-11-20 10:05:29.175989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.176008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.185589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ee190 00:26:55.813 
[2024-11-20 10:05:29.186973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.187004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.193983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8618 00:26:55.813 [2024-11-20 10:05:29.195011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.195029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.202830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f96f8 00:26:55.813 [2024-11-20 10:05:29.203818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.203836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.211900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f8618 00:26:55.813 [2024-11-20 10:05:29.212768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.212786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.220386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5640) with pdu=0x2000166fd208 00:26:55.813 [2024-11-20 10:05:29.221248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.221266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.229753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e95a0 00:26:55.813 [2024-11-20 10:05:29.230780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.230797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.239149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166dece0 00:26:55.813 [2024-11-20 10:05:29.240274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.240292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.248220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f4f40 00:26:55.813 [2024-11-20 10:05:29.249319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.249337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.257394] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fc128 00:26:55.813 [2024-11-20 10:05:29.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.264959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e23b8 00:26:55.813 [2024-11-20 10:05:29.265429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.265448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.275266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f96f8 00:26:55.813 [2024-11-20 10:05:29.276494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.276513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.283584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e88f8 00:26:55.813 [2024-11-20 10:05:29.284448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.284466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:26:55.813 [2024-11-20 10:05:29.292414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f7538 00:26:55.813 [2024-11-20 10:05:29.293286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.293305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.301616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fdeb0 00:26:55.813 [2024-11-20 10:05:29.302326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.302344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.309826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ff3c8 00:26:55.813 [2024-11-20 10:05:29.310655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.310673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.318663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ef6a8 00:26:55.813 [2024-11-20 10:05:29.319420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.319438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.327688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb8b8 00:26:55.813 [2024-11-20 10:05:29.328515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.328533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.336640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fb480 00:26:55.813 [2024-11-20 10:05:29.337451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.337469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.345601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ddc00 00:26:55.813 [2024-11-20 10:05:29.346412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.346429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.354600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e01f8 00:26:55.813 [2024-11-20 10:05:29.355375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.355392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.813 [2024-11-20 10:05:29.363857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f0bc0 00:26:55.813 [2024-11-20 10:05:29.364723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.813 [2024-11-20 10:05:29.364741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:55.814 [2024-11-20 10:05:29.372365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166fdeb0 00:26:55.814 [2024-11-20 10:05:29.373224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.814 [2024-11-20 10:05:29.373258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:55.814 [2024-11-20 10:05:29.382500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ec408 00:26:56.075 [2024-11-20 10:05:29.383530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.075 [2024-11-20 10:05:29.383549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:56.075 [2024-11-20 10:05:29.391834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166f81e0 00:26:56.075 [2024-11-20 10:05:29.392944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.075 [2024-11-20 10:05:29.392962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:56.075 [2024-11-20 10:05:29.400290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166ebfd0 00:26:56.075 [2024-11-20 10:05:29.401271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.075 [2024-11-20 10:05:29.401290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:56.075 [2024-11-20 10:05:29.408931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5640) with pdu=0x2000166e9e10 00:26:56.075 [2024-11-20 10:05:29.409935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.075 [2024-11-20 10:05:29.409953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:56.075 28223.50 IOPS, 110.25 MiB/s 00:26:56.075 Latency(us) 00:26:56.075 [2024-11-20T09:05:29.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.075 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.075 nvme0n1 : 2.00 28253.02 110.36 0.00 0.00 4526.39 1747.63 14542.75 00:26:56.075 [2024-11-20T09:05:29.657Z] =================================================================================================================== 00:26:56.075 [2024-11-20T09:05:29.657Z] Total : 28253.02 110.36 0.00 0.00 4526.39 1747.63 14542.75 00:26:56.075 { 00:26:56.075 "results": [ 00:26:56.075 { 00:26:56.075 "job": "nvme0n1", 00:26:56.075 "core_mask": "0x2", 00:26:56.075 "workload": "randwrite", 00:26:56.075 "status": "finished", 00:26:56.075 "queue_depth": 128, 00:26:56.075 "io_size": 
4096, 00:26:56.075 "runtime": 2.002441, 00:26:56.075 "iops": 28253.017192516534, 00:26:56.075 "mibps": 110.36334840826771, 00:26:56.075 "io_failed": 0, 00:26:56.075 "io_timeout": 0, 00:26:56.075 "avg_latency_us": 4526.386025663363, 00:26:56.075 "min_latency_us": 1747.6266666666668, 00:26:56.075 "max_latency_us": 14542.750476190477 00:26:56.075 } 00:26:56.075 ], 00:26:56.076 "core_count": 1 00:26:56.076 } 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:56.076 | .driver_specific 00:26:56.076 | .nvme_error 00:26:56.076 | .status_code 00:26:56.076 | .command_transient_transport_error' 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2803983 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2803983 ']' 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2803983 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.076 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2803983 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2803983' 00:26:56.335 killing process with pid 2803983 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2803983 00:26:56.335 Received shutdown signal, test time was about 2.000000 seconds 00:26:56.335 00:26:56.335 Latency(us) 00:26:56.335 [2024-11-20T09:05:29.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.335 [2024-11-20T09:05:29.917Z] =================================================================================================================== 00:26:56.335 [2024-11-20T09:05:29.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2803983 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2804462 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2804462 /var/tmp/bperf.sock 
00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2804462 ']' 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.335 10:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:56.335 [2024-11-20 10:05:29.898218] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:26:56.335 [2024-11-20 10:05:29.898271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804462 ] 00:26:56.335 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.335 Zero copy mechanism will not be used. 
00:26:56.594 [2024-11-20 10:05:29.974169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.594 [2024-11-20 10:05:30.012804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.594 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.594 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:56.594 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:56.594 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.853 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.423 nvme0n1 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:57.423 10:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.423 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.423 Zero copy mechanism will not be used. 00:26:57.423 Running I/O for 2 seconds... 00:26:57.423 [2024-11-20 10:05:30.866309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.866406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.866436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.423 [2024-11-20 10:05:30.870982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.871045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.871068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.423 [2024-11-20 
10:05:30.875393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.875475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.875500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.423 [2024-11-20 10:05:30.879764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.879823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.879842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.423 [2024-11-20 10:05:30.884045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.884111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.884129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.423 [2024-11-20 10:05:30.888411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.423 [2024-11-20 10:05:30.888478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.423 [2024-11-20 10:05:30.888498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0
00:26:57.423 [2024-11-20 10:05:30.892658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8
00:26:57.423 [2024-11-20 10:05:30.892717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.423 [2024-11-20 10:05:30.892736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-entry pattern — tcp.c:2233 data digest error on tqpair=(0xfd5980), WRITE command print (varying lba, len:32), COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062 — repeats for dozens more WRITE commands between 10:05:30.896 and 10:05:31.238 ...]
00:26:57.687 [2024-11-20 10:05:31.243543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8
00:26:57.687 [2024-11-20 10:05:31.243607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.687 [2024-11-20 10:05:31.243625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0
dnr:0 00:26:57.687 [2024-11-20 10:05:31.248867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.687 [2024-11-20 10:05:31.249001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.687 [2024-11-20 10:05:31.249020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.687 [2024-11-20 10:05:31.254115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.687 [2024-11-20 10:05:31.254263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.687 [2024-11-20 10:05:31.254282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.687 [2024-11-20 10:05:31.261358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.687 [2024-11-20 10:05:31.261492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.687 [2024-11-20 10:05:31.261511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.268363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.268494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.268515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.275572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.275707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.275726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.282551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.282701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.282721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.289221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.289290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.289309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.295792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.295935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.295954] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.301826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.301898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.301918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.306664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.306723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.306742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.311072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.311132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.311151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.315464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.315518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.315536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.319871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.319953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.947 [2024-11-20 10:05:31.319972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.947 [2024-11-20 10:05:31.324396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.947 [2024-11-20 10:05:31.324474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.324493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.328816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.328883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.328903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.333239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.333295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:57.948 [2024-11-20 10:05:31.333314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.337770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.337837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.337856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.342246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.342324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.346848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.346927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.346946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.351337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.351419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.351443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.355688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.355766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.355785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.360276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.360353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.360372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.364669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.364729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.364747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.369255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.369320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.369339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.373886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.373966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.378630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.378685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.378703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.384342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.384486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.384505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.390421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.390572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.390591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.397228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.397369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.397388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.404941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.405008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.405027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.412725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.412861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.420811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with 
pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.420960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.420979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.428445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.428582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.428600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.436459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.436640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.436658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.444738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.444893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.444912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.452996] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.453084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.453103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.459333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.459410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.459429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.464611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.464668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.464687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.470260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.470318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.470338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 
10:05:31.475162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.475222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.948 [2024-11-20 10:05:31.475241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.948 [2024-11-20 10:05:31.481068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.948 [2024-11-20 10:05:31.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.481175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.485908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.485968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.485986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.490495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.490615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.490633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.495089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.495160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.495178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.499626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.499744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.499762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.504262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.504368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.504391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.508908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.508979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.508998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.513607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.513721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.513739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.518232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.518310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.518329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:57.949 [2024-11-20 10:05:31.523022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:57.949 [2024-11-20 10:05:31.523119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.949 [2024-11-20 10:05:31.523137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.209 [2024-11-20 10:05:31.527793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.209 [2024-11-20 10:05:31.527903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.209 [2024-11-20 10:05:31.527921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:58.209 [2024-11-20 10:05:31.532568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8
00:26:58.209 [2024-11-20 10:05:31.532667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.209 [2024-11-20 10:05:31.532686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[The same three-record pattern — a data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8, the failed WRITE command (sqid:1 cid:0 nsid:1, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for each subsequent WRITE, with only the timestamps, lba, and sqhd values varying, from 10:05:31.537 through 10:05:31.926.]
00:26:58.472 6378.00 IOPS, 797.25 MiB/s [2024-11-20T09:05:32.054Z]
00:26:58.473 [2024-11-20 10:05:31.932163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8
00:26:58.473 [2024-11-20 10:05:31.932514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.932539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.938066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.938310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.938329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.944416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.944712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.944732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.950305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.950619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.950639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.956471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.956800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.956820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.962434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.962768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.962788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.969548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.969835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.969854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.975211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.975478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.975499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.979754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.979980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.980000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.984093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.984319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.984339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.988232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.988457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.988477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.992391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:31.992624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.992643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:31.996507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 
[2024-11-20 10:05:31.996739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:31.996758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.000632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.000866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.000886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.004700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.004935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.008772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.009008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.009028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.012901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.013140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.013159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.017041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.017290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.017309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.021235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.021462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.021481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.025530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.025770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.025790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.029919] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.473 [2024-11-20 10:05:32.030156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.473 [2024-11-20 10:05:32.030175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.473 [2024-11-20 10:05:32.034367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.474 [2024-11-20 10:05:32.034626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.474 [2024-11-20 10:05:32.034645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.474 [2024-11-20 10:05:32.038822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.474 [2024-11-20 10:05:32.039074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.474 [2024-11-20 10:05:32.039094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.474 [2024-11-20 10:05:32.043249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.474 [2024-11-20 10:05:32.043505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.474 [2024-11-20 10:05:32.043524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:58.474 [2024-11-20 10:05:32.047607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.474 [2024-11-20 10:05:32.047846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.474 [2024-11-20 10:05:32.047865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.051989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.052228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.052247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.056396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.056662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.056685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.060823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.061051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.061070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.065140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.065368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.065388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.069523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.069755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.069774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.073900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.074131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.074150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.078268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.078511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.082563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.082813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.082832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.086991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.087245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.087265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.091373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.091630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.091649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.095704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.095959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.095978] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.100129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.100366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.104383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.104594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.104613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.108579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.108768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.108786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.112594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.112785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:58.735 [2024-11-20 10:05:32.112811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.116717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.116908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.116925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.120731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.120899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.120917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.125112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.125264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.125282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.129730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.129866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.735 [2024-11-20 10:05:32.129884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.735 [2024-11-20 10:05:32.134815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.735 [2024-11-20 10:05:32.134963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.134982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.139407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.139546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.139564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.143453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.143597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.143616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.147291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.147452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.147480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.151138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.151299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.151316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.155014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.155154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.155171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.158909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 [2024-11-20 10:05:32.159044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.736 [2024-11-20 10:05:32.159062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.736 [2024-11-20 10:05:32.162705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:58.736 
[2024-11-20 10:05:32.162862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.736 [2024-11-20 10:05:32.162880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:58.736 [2024-11-20 10:05:32.166684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8
00:26:58.736 [2024-11-20 10:05:32.166849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.736 [2024-11-20 10:05:32.166870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... same three-line cycle (tcp.c:2233:data_crc32_calc_done Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 → nvme_qpair.c WRITE command notice → COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at ~4 ms intervals through 10:05:32.486968, with only lba and sqhd (cycling 0002/0022/0042/0062) varying ...]
00:26:59.001 [2024-11-20 10:05:32.490750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 
[2024-11-20 10:05:32.490921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.490939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.494731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.494859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.494876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.498632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.498760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.498778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.502500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.502647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.502665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.506477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.506626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.506644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.510695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.510802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.510820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.515395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.515545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.515562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.519481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.519603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.519621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.523927] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.524064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.524082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.529462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.529623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.529641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.535087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.535225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.535243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.541774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.541954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.541972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:59.001 [2024-11-20 10:05:32.547834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.548010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.548028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.552526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.552663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.552681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.556489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.556645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.560316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.560459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.560477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.564159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.564310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.564328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.568244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.568367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.568385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.001 [2024-11-20 10:05:32.572907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.001 [2024-11-20 10:05:32.573018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.001 [2024-11-20 10:05:32.573037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.577733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.577872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.577892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.582058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.582193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.582218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.587498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.587636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.593829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.594000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.594020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.600377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.600581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.600601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.606718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.606911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.606931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.613041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.613267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.613287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.619652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.619818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.626069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.626243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 
[2024-11-20 10:05:32.626261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.632502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.632709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.632744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.639193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.639341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.639359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.645812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.646207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.306 [2024-11-20 10:05:32.646227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.306 [2024-11-20 10:05:32.653124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.306 [2024-11-20 10:05:32.653309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.653327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.660073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.660279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.660310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.666639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.666906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.673493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.673637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.673656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.679673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.679836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.679855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.686479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.686704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.686724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.693343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.693523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.693541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.699744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.699992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.700011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.706679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.706901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.706921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.713686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.713901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.713920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.721074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.721274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.721293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.726996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.727115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.727134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.731253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 
00:26:59.307 [2024-11-20 10:05:32.731391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.731408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.735220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.735374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.735392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.739191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.739368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.739386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.743490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.743632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.743651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.747480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.747623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.747642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.751691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.751834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.751852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.755707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.755834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.755853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.759693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.759859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.759877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.763645] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.763800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.763817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.767850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.768021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.772842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.772986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.773004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.777378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.777526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.777544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:59.307 [2024-11-20 10:05:32.782415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.782564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.782583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.307 [2024-11-20 10:05:32.787389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.307 [2024-11-20 10:05:32.787586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.307 [2024-11-20 10:05:32.787642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.793273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.793473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.793491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.799307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.799451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.799476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.805193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.805361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.805379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.811954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.812160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.818448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.818704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.818724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.825606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.825738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.825756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.831893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.832044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.832063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.838100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.838323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.838343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.844287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.844425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.844443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.850570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.850757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.850776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.856597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.856883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.308 [2024-11-20 10:05:32.862937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.308 [2024-11-20 10:05:32.863130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.308 [2024-11-20 10:05:32.863151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.604 6476.50 IOPS, 809.56 MiB/s [2024-11-20T09:05:33.186Z] [2024-11-20 10:05:32.869632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfd5980) with pdu=0x2000166ff3c8 00:26:59.604 [2024-11-20 10:05:32.869885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.604 [2024-11-20 10:05:32.869905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.604 00:26:59.604 Latency(us) 00:26:59.604 [2024-11-20T09:05:33.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.604 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:59.604 nvme0n1 : 2.00 6470.90 
808.86 0.00 0.00 2467.89 1778.83 8238.81 00:26:59.604 [2024-11-20T09:05:33.186Z] =================================================================================================================== 00:26:59.604 [2024-11-20T09:05:33.186Z] Total : 6470.90 808.86 0.00 0.00 2467.89 1778.83 8238.81 00:26:59.604 { 00:26:59.604 "results": [ 00:26:59.604 { 00:26:59.604 "job": "nvme0n1", 00:26:59.604 "core_mask": "0x2", 00:26:59.604 "workload": "randwrite", 00:26:59.604 "status": "finished", 00:26:59.604 "queue_depth": 16, 00:26:59.604 "io_size": 131072, 00:26:59.604 "runtime": 2.004823, 00:26:59.604 "iops": 6470.895435656913, 00:26:59.604 "mibps": 808.8619294571141, 00:26:59.604 "io_failed": 0, 00:26:59.604 "io_timeout": 0, 00:26:59.604 "avg_latency_us": 2467.8937873165146, 00:26:59.604 "min_latency_us": 1778.8342857142857, 00:26:59.604 "max_latency_us": 8238.81142857143 00:26:59.604 } 00:26:59.604 ], 00:26:59.604 "core_count": 1 00:26:59.604 } 00:26:59.604 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:59.604 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:59.604 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:59.604 | .driver_specific 00:26:59.604 | .nvme_error 00:26:59.604 | .status_code 00:26:59.604 | .command_transient_transport_error' 00:26:59.604 10:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 419 > 0 )) 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2804462 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 2804462 ']' 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2804462 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2804462 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2804462' 00:26:59.604 killing process with pid 2804462 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2804462 00:26:59.604 Received shutdown signal, test time was about 2.000000 seconds 00:26:59.604 00:26:59.604 Latency(us) 00:26:59.604 [2024-11-20T09:05:33.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.604 [2024-11-20T09:05:33.186Z] =================================================================================================================== 00:26:59.604 [2024-11-20T09:05:33.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.604 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2804462 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2802788 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2802788 ']' 00:26:59.864 
10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2802788 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802788 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802788' 00:26:59.864 killing process with pid 2802788 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2802788 00:26:59.864 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2802788 00:27:00.123 00:27:00.123 real 0m14.382s 00:27:00.123 user 0m27.516s 00:27:00.123 sys 0m4.600s 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.123 ************************************ 00:27:00.123 END TEST nvmf_digest_error 00:27:00.123 ************************************ 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.123 10:05:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.123 rmmod nvme_tcp 00:27:00.123 rmmod nvme_fabrics 00:27:00.123 rmmod nvme_keyring 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2802788 ']' 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2802788 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2802788 ']' 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2802788 00:27:00.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2802788) - No such process 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2802788 is not found' 00:27:00.123 Process with pid 2802788 is not found 00:27:00.123 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@791 -- # iptables-save 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.124 10:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.125 10:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.125 00:27:02.125 real 0m37.370s 00:27:02.125 user 0m56.735s 00:27:02.125 sys 0m13.732s 00:27:02.125 10:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.125 10:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:02.125 ************************************ 00:27:02.125 END TEST nvmf_digest 00:27:02.125 ************************************ 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:27:02.385 10:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.385 ************************************ 00:27:02.385 START TEST nvmf_bdevperf 00:27:02.385 ************************************ 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:02.385 * Looking for test storage... 00:27:02.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.385 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.386 10:05:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.386 --rc genhtml_branch_coverage=1 00:27:02.386 --rc genhtml_function_coverage=1 00:27:02.386 --rc genhtml_legend=1 00:27:02.386 --rc geninfo_all_blocks=1 00:27:02.386 --rc geninfo_unexecuted_blocks=1 00:27:02.386 00:27:02.386 ' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.386 --rc genhtml_branch_coverage=1 00:27:02.386 --rc genhtml_function_coverage=1 00:27:02.386 --rc genhtml_legend=1 00:27:02.386 --rc geninfo_all_blocks=1 00:27:02.386 --rc geninfo_unexecuted_blocks=1 00:27:02.386 00:27:02.386 ' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.386 --rc genhtml_branch_coverage=1 00:27:02.386 --rc genhtml_function_coverage=1 00:27:02.386 --rc genhtml_legend=1 00:27:02.386 --rc geninfo_all_blocks=1 00:27:02.386 --rc geninfo_unexecuted_blocks=1 00:27:02.386 00:27:02.386 ' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.386 --rc genhtml_branch_coverage=1 00:27:02.386 --rc genhtml_function_coverage=1 00:27:02.386 --rc genhtml_legend=1 00:27:02.386 --rc geninfo_all_blocks=1 00:27:02.386 --rc geninfo_unexecuted_blocks=1 00:27:02.386 00:27:02.386 ' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.386 10:05:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.386 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:02.647 10:05:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.220 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.221 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.221 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.221 Found net devices under 0000:86:00.1: cvl_0_1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:27:09.221 00:27:09.221 --- 10.0.0.2 ping statistics --- 00:27:09.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.221 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:27:09.221 00:27:09.221 --- 10.0.0.1 ping statistics --- 00:27:09.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.221 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2808689 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2808689 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2808689 ']' 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.221 10:05:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 [2024-11-20 10:05:41.952562] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:27:09.221 [2024-11-20 10:05:41.952603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.221 [2024-11-20 10:05:42.031634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:09.221 [2024-11-20 10:05:42.073893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.221 [2024-11-20 10:05:42.073931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:09.221 [2024-11-20 10:05:42.073938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.221 [2024-11-20 10:05:42.073944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.221 [2024-11-20 10:05:42.073949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.221 [2024-11-20 10:05:42.075390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.222 [2024-11-20 10:05:42.075408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.222 [2024-11-20 10:05:42.075413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 [2024-11-20 10:05:42.223532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.222 10:05:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 Malloc0 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.222 [2024-11-20 10:05:42.289162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.222 { 00:27:09.222 "params": { 00:27:09.222 "name": "Nvme$subsystem", 00:27:09.222 "trtype": "$TEST_TRANSPORT", 00:27:09.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.222 "adrfam": "ipv4", 00:27:09.222 "trsvcid": "$NVMF_PORT", 00:27:09.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.222 "hdgst": ${hdgst:-false}, 00:27:09.222 "ddgst": ${ddgst:-false} 00:27:09.222 }, 00:27:09.222 "method": "bdev_nvme_attach_controller" 00:27:09.222 } 00:27:09.222 EOF 00:27:09.222 )") 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:09.222 10:05:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:09.222 "params": { 00:27:09.222 "name": "Nvme1", 00:27:09.222 "trtype": "tcp", 00:27:09.222 "traddr": "10.0.0.2", 00:27:09.222 "adrfam": "ipv4", 00:27:09.222 "trsvcid": "4420", 00:27:09.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:09.222 "hdgst": false, 00:27:09.222 "ddgst": false 00:27:09.222 }, 00:27:09.222 "method": "bdev_nvme_attach_controller" 00:27:09.222 }' 00:27:09.222 [2024-11-20 10:05:42.343263] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:27:09.222 [2024-11-20 10:05:42.343306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808712 ] 00:27:09.222 [2024-11-20 10:05:42.417232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.222 [2024-11-20 10:05:42.458174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.222 Running I/O for 1 seconds... 
00:27:10.599 11496.00 IOPS, 44.91 MiB/s 00:27:10.599 Latency(us) 00:27:10.599 [2024-11-20T09:05:44.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:10.599 Verification LBA range: start 0x0 length 0x4000 00:27:10.599 Nvme1n1 : 1.01 11537.44 45.07 0.00 0.00 11052.29 1521.37 12295.80 00:27:10.599 [2024-11-20T09:05:44.181Z] =================================================================================================================== 00:27:10.599 [2024-11-20T09:05:44.181Z] Total : 11537.44 45.07 0.00 0.00 11052.29 1521.37 12295.80 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2808953 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:10.599 { 00:27:10.599 "params": { 00:27:10.599 "name": "Nvme$subsystem", 00:27:10.599 "trtype": "$TEST_TRANSPORT", 00:27:10.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.599 "adrfam": "ipv4", 00:27:10.599 "trsvcid": "$NVMF_PORT", 00:27:10.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.599 "hdgst": ${hdgst:-false}, 00:27:10.599 "ddgst": 
${ddgst:-false} 00:27:10.599 }, 00:27:10.599 "method": "bdev_nvme_attach_controller" 00:27:10.599 } 00:27:10.599 EOF 00:27:10.599 )") 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:10.599 10:05:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:10.599 "params": { 00:27:10.599 "name": "Nvme1", 00:27:10.599 "trtype": "tcp", 00:27:10.599 "traddr": "10.0.0.2", 00:27:10.599 "adrfam": "ipv4", 00:27:10.599 "trsvcid": "4420", 00:27:10.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:10.599 "hdgst": false, 00:27:10.599 "ddgst": false 00:27:10.599 }, 00:27:10.599 "method": "bdev_nvme_attach_controller" 00:27:10.599 }' 00:27:10.599 [2024-11-20 10:05:43.995083] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:27:10.599 [2024-11-20 10:05:43.995134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808953 ] 00:27:10.599 [2024-11-20 10:05:44.069456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.599 [2024-11-20 10:05:44.107341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.858 Running I/O for 15 seconds... 
00:27:13.172 11494.00 IOPS, 44.90 MiB/s [2024-11-20T09:05:47.017Z] 11463.50 IOPS, 44.78 MiB/s [2024-11-20T09:05:47.017Z] 10:05:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2808689 00:27:13.435 10:05:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:13.435 [2024-11-20 10:05:46.962338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.435 [2024-11-20 10:05:46.962396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.435 [2024-11-20 10:05:46.962415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.435 [2024-11-20 10:05:46.962433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.435 [2024-11-20 10:05:46.962448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.435 [2024-11-20 10:05:46.962465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.435 [2024-11-20 10:05:46.962472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.436 [2024-11-20 10:05:46.962556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.436 [2024-11-20 10:05:46.962822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.436 [2024-11-20 10:05:46.962984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.436 [2024-11-20 10:05:46.962992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.962998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 
[2024-11-20 10:05:46.963067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 
[2024-11-20 10:05:46.963316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.437 [2024-11-20 10:05:46.963455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.437 [2024-11-20 10:05:46.963462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.438 [2024-11-20 10:05:46.963562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 
[2024-11-20 10:05:46.963806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.438 [2024-11-20 10:05:46.963821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.438 [2024-11-20 10:05:46.963904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.438 [2024-11-20 10:05:46.963912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.963989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.963996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.439 [2024-11-20 10:05:46.964056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.439 [2024-11-20 10:05:46.964261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.964268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3bcf0 is same with the state(6) to be set 00:27:13.439 [2024-11-20 10:05:46.964277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:13.439 [2024-11-20 10:05:46.964282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:13.439 [2024-11-20 10:05:46.964288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:27:13.439 [2024-11-20 10:05:46.964296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.439 [2024-11-20 10:05:46.967112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:27:13.439 [2024-11-20 10:05:46.967166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.439 [2024-11-20 10:05:46.967770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:05:46.967787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.439 [2024-11-20 10:05:46.967795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.439 [2024-11-20 10:05:46.967968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.439 [2024-11-20 10:05:46.968139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.439 [2024-11-20 10:05:46.968147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.439 [2024-11-20 10:05:46.968154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.439 [2024-11-20 10:05:46.968163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.439 [2024-11-20 10:05:46.980280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.439 [2024-11-20 10:05:46.980715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.439 [2024-11-20 10:05:46.980734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.439 [2024-11-20 10:05:46.980742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.439 [2024-11-20 10:05:46.980914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.439 [2024-11-20 10:05:46.981087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.439 [2024-11-20 10:05:46.981095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.439 [2024-11-20 10:05:46.981102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.439 [2024-11-20 10:05:46.981113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.439 [2024-11-20 10:05:46.993276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.439 [2024-11-20 10:05:46.993608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:05:46.993626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.440 [2024-11-20 10:05:46.993634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.440 [2024-11-20 10:05:46.993800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.440 [2024-11-20 10:05:46.993968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.440 [2024-11-20 10:05:46.993976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.440 [2024-11-20 10:05:46.993983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.440 [2024-11-20 10:05:46.993990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.440 [2024-11-20 10:05:47.006044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.440 [2024-11-20 10:05:47.006481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.440 [2024-11-20 10:05:47.006498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.440 [2024-11-20 10:05:47.006505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.440 [2024-11-20 10:05:47.006677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.440 [2024-11-20 10:05:47.006848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.440 [2024-11-20 10:05:47.006857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.440 [2024-11-20 10:05:47.006863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.440 [2024-11-20 10:05:47.006869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.018883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.019311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.019329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.019336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.019503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.019688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.019697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.019703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.019709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.031743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.032164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.032184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.032192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.032365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.032533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.032541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.032547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.032554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.044580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.044974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.044990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.044998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.045155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.045340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.045349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.045355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.045361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.057492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.057886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.057903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.057910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.058076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.058249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.058258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.058265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.058271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.070252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.070658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.070701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.070725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.071164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.071350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.071359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.071366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.071371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.083057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.083481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.083498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.083505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.083672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.083838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.083846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.083852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.083858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.095864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.096268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.096314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.096338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.096916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.097511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.097537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.097558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.097577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.108786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.109238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.109284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.109308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.109884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.110480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.110506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.110513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.110520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.121672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.701 [2024-11-20 10:05:47.122089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.701 [2024-11-20 10:05:47.122106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.701 [2024-11-20 10:05:47.122113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.701 [2024-11-20 10:05:47.122284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.701 [2024-11-20 10:05:47.122451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.701 [2024-11-20 10:05:47.122459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.701 [2024-11-20 10:05:47.122465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.701 [2024-11-20 10:05:47.122472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.701 [2024-11-20 10:05:47.134487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.134924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.134941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.134948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.135114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.135286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.135295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.135301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.135308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.147355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.147706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.147722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.147730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.147896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.148063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.148071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.148077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.148083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.160210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.160650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.160667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.160674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.160831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.160989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.160997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.161003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.161009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.173123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.173533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.173549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.173557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.173723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.173889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.173897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.173904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.173910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.185868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.186284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.186300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.186307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.186474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.186640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.186648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.186654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.186660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.198857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.199297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.199317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.199325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.199494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.199652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.199660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.199665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.199672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.211594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.211986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.212002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.212009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.212167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.212352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.212361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.212367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.212373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.224700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.225064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.225082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.225090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.225268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.225447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.225457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.225463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.225469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.237719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.238157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.238175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.238183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.238364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.238537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.238546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.238553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.238559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.702 [2024-11-20 10:05:47.250558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.702 [2024-11-20 10:05:47.250957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.702 [2024-11-20 10:05:47.251003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.702 [2024-11-20 10:05:47.251026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.702 [2024-11-20 10:05:47.251632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.702 [2024-11-20 10:05:47.251801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.702 [2024-11-20 10:05:47.251810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.702 [2024-11-20 10:05:47.251816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.702 [2024-11-20 10:05:47.251822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.703 [2024-11-20 10:05:47.263403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.703 [2024-11-20 10:05:47.263829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-11-20 10:05:47.263846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-11-20 10:05:47.263854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.703 [2024-11-20 10:05:47.264022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.703 [2024-11-20 10:05:47.264188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.703 [2024-11-20 10:05:47.264196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.703 [2024-11-20 10:05:47.264209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.703 [2024-11-20 10:05:47.264215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.703 [2024-11-20 10:05:47.276385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.703 [2024-11-20 10:05:47.276790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.703 [2024-11-20 10:05:47.276808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.703 [2024-11-20 10:05:47.276816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.703 [2024-11-20 10:05:47.276987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.703 [2024-11-20 10:05:47.277159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.703 [2024-11-20 10:05:47.277167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.703 [2024-11-20 10:05:47.277177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.703 [2024-11-20 10:05:47.277184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.963 [2024-11-20 10:05:47.289289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.963 [2024-11-20 10:05:47.289664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:05:47.289710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-11-20 10:05:47.289733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.963 [2024-11-20 10:05:47.290324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.963 [2024-11-20 10:05:47.290909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.963 [2024-11-20 10:05:47.290936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.963 [2024-11-20 10:05:47.290958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.963 [2024-11-20 10:05:47.290983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.963 [2024-11-20 10:05:47.302255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.963 [2024-11-20 10:05:47.302629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:05:47.302646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-11-20 10:05:47.302654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.963 [2024-11-20 10:05:47.302825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.963 [2024-11-20 10:05:47.302996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.963 [2024-11-20 10:05:47.303005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.963 [2024-11-20 10:05:47.303011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.963 [2024-11-20 10:05:47.303018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.963 [2024-11-20 10:05:47.315179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.963 [2024-11-20 10:05:47.315534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.963 [2024-11-20 10:05:47.315550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.963 [2024-11-20 10:05:47.315557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.963 [2024-11-20 10:05:47.315724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.963 [2024-11-20 10:05:47.315890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.963 [2024-11-20 10:05:47.315899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.963 [2024-11-20 10:05:47.315905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.963 [2024-11-20 10:05:47.315911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.963 [2024-11-20 10:05:47.327932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.963 [2024-11-20 10:05:47.328276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.328293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.328300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.328466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.328633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.328641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.328647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.328653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.340819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.341172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.341230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.341256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.341830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.342002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.342010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.342016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.342022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.353643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.353941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.353957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.353964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.354130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.354303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.354312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.354318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.354324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.366453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.366801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.366818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.366829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.366995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.367161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.367169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.367175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.367181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.379282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.379590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.379634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.379657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.380193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.380365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.380374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.380381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.380387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.392175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.392525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.392542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.392549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.392716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.392883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.392891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.392897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.392903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.405032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.405382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.405399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.405406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.405572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.405741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.405749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.405755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.405761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.417877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.418219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.418235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.418242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.418409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.418576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.418584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.418590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.418595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 [2024-11-20 10:05:47.430693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.430976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.430992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.430999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.431166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.431337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.431347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.431353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.431359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.964 9573.33 IOPS, 37.40 MiB/s [2024-11-20T09:05:47.546Z] [2024-11-20 10:05:47.443627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.964 [2024-11-20 10:05:47.443964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.964 [2024-11-20 10:05:47.443981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.964 [2024-11-20 10:05:47.443989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.964 [2024-11-20 10:05:47.444155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.964 [2024-11-20 10:05:47.444329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.964 [2024-11-20 10:05:47.444337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.964 [2024-11-20 10:05:47.444347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.964 [2024-11-20 10:05:47.444353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.456501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.456933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.456950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.456957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.457123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.457311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.457319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.457326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.457332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.469224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.469601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.469619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.469627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.469793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.469960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.469968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.469974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.469981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.482291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.482578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.482595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.482602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.482773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.482945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.482953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.482959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.482966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.495242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.495669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.495686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.495694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.495861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.496028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.496036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.496042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.496048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.508127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.508501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.508517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.508525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.508705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.508876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.508885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.508891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.508897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.520954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.521309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.521355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.521379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.521957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.522385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.522393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.522400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.522406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:13.965 [2024-11-20 10:05:47.533722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:13.965 [2024-11-20 10:05:47.534095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.965 [2024-11-20 10:05:47.534111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:13.965 [2024-11-20 10:05:47.534122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:13.965 [2024-11-20 10:05:47.534296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:13.965 [2024-11-20 10:05:47.534463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:13.965 [2024-11-20 10:05:47.534471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:13.965 [2024-11-20 10:05:47.534477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:13.965 [2024-11-20 10:05:47.534484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.226 [2024-11-20 10:05:47.546727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.226 [2024-11-20 10:05:47.547116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.226 [2024-11-20 10:05:47.547133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.226 [2024-11-20 10:05:47.547140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.226 [2024-11-20 10:05:47.547316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.226 [2024-11-20 10:05:47.547488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.226 [2024-11-20 10:05:47.547496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.226 [2024-11-20 10:05:47.547502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.226 [2024-11-20 10:05:47.547508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.226 [2024-11-20 10:05:47.559568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.559965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.559981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.559988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.560154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.560325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.560334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.560340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.560347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.572403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.572788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.572833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.572857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.573448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.573643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.573652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.573658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.573664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.585223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.585593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.585609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.585617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.585782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.585949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.585957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.585963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.585970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.598042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.598468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.598485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.598492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.598658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.598826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.598834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.598840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.598846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.610911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.611191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.611216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.611224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.611390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.611557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.611566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.611574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.611580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.623846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.624258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.624274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.624281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.624447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.624614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.624622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.624628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.624634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.636717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.636993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.637009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.226 [2024-11-20 10:05:47.637016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.226 [2024-11-20 10:05:47.637182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.226 [2024-11-20 10:05:47.637356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.226 [2024-11-20 10:05:47.637365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.226 [2024-11-20 10:05:47.637372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.226 [2024-11-20 10:05:47.637378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.226 [2024-11-20 10:05:47.649595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.226 [2024-11-20 10:05:47.649956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.226 [2024-11-20 10:05:47.650001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.650024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.650614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.651019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.651035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.651049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.651062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.664450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.664954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.664978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.664988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.665249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.665504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.665516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.665525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.665535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.677478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.677931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.677976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.677999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.678591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.678869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.678878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.678884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.678891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.690407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.690769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.690785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.690792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.690958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.691126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.691133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.691140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.691145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.703204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.703561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.703577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.703587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.703754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.703921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.703928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.703934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.703941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.716014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.716389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.716406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.716413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.716579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.716745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.716753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.716759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.716765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.728823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.729263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.729283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.729291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.729458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.729625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.729633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.729640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.729646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.227 [2024-11-20 10:05:47.741959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.227 [2024-11-20 10:05:47.742307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.227 [2024-11-20 10:05:47.742325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.227 [2024-11-20 10:05:47.742336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.227 [2024-11-20 10:05:47.742510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.227 [2024-11-20 10:05:47.742686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.227 [2024-11-20 10:05:47.742695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.227 [2024-11-20 10:05:47.742701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.227 [2024-11-20 10:05:47.742708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.228 [2024-11-20 10:05:47.754776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.228 [2024-11-20 10:05:47.755235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.228 [2024-11-20 10:05:47.755282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.228 [2024-11-20 10:05:47.755305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.228 [2024-11-20 10:05:47.755882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.228 [2024-11-20 10:05:47.756348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.228 [2024-11-20 10:05:47.756357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.228 [2024-11-20 10:05:47.756363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.228 [2024-11-20 10:05:47.756370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.228 [2024-11-20 10:05:47.767615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.228 [2024-11-20 10:05:47.768057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.228 [2024-11-20 10:05:47.768073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.228 [2024-11-20 10:05:47.768081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.228 [2024-11-20 10:05:47.768253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.228 [2024-11-20 10:05:47.768420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.228 [2024-11-20 10:05:47.768428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.228 [2024-11-20 10:05:47.768434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.228 [2024-11-20 10:05:47.768440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.228 [2024-11-20 10:05:47.780334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.228 [2024-11-20 10:05:47.780758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.228 [2024-11-20 10:05:47.780802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.228 [2024-11-20 10:05:47.780826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.228 [2024-11-20 10:05:47.781270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.228 [2024-11-20 10:05:47.781438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.228 [2024-11-20 10:05:47.781447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.228 [2024-11-20 10:05:47.781456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.228 [2024-11-20 10:05:47.781463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.228 [2024-11-20 10:05:47.793060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.228 [2024-11-20 10:05:47.793466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.228 [2024-11-20 10:05:47.793483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.228 [2024-11-20 10:05:47.793491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.228 [2024-11-20 10:05:47.793657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.228 [2024-11-20 10:05:47.793823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.228 [2024-11-20 10:05:47.793831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.228 [2024-11-20 10:05:47.793837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.228 [2024-11-20 10:05:47.793843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.489 [2024-11-20 10:05:47.806049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.489 [2024-11-20 10:05:47.806485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.489 [2024-11-20 10:05:47.806502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.806509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.806674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.806841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.806849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.806855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.806861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.818831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.819224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.819265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.819290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.819867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.820149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.820156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.820162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.820168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.831635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.832057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.832072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.832079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.832257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.832425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.832433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.832439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.832445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.844402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.844812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.844827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.844834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.844992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.845150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.845158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.845163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.845169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.857196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.857599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.857615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.857622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.857779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.857937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.857944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.857950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.857956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.869917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.870337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.870353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.870362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.870520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.870678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.870685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.870691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.870697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.882747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.883177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.883234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.883258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.883835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.884426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.884452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.884472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.884497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.895677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.896037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.896054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.896061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.896232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.896399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.896407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.896413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.490 [2024-11-20 10:05:47.896419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.490 [2024-11-20 10:05:47.908520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:14.490 [2024-11-20 10:05:47.908888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.490 [2024-11-20 10:05:47.908904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420
00:27:14.490 [2024-11-20 10:05:47.908911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set
00:27:14.490 [2024-11-20 10:05:47.909078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor
00:27:14.490 [2024-11-20 10:05:47.909255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:14.490 [2024-11-20 10:05:47.909264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:14.490 [2024-11-20 10:05:47.909271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:14.491 [2024-11-20 10:05:47.909276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:14.491 [2024-11-20 10:05:47.921411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.921821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.921837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.921844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.922010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.922176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.922184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.922190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.922197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.934221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.934574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.934590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.934598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.934763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.934930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.934938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.934944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.934950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.947026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.947369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.947386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.947393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.947559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.947726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.947734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.947740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.947750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.959822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.960218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.960234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.960242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.960408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.960575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.960583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.960589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.960595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.972674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.973028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.973043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.973051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.973223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.973392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.973400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.973407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.973413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.985622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.986046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.986064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.986071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.986244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.986432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.986441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.986447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.986454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:47.998624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:47.999047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:47.999064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:47.999072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:47.999254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:47.999427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:47.999436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.491 [2024-11-20 10:05:47.999442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.491 [2024-11-20 10:05:47.999449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.491 [2024-11-20 10:05:48.011516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.491 [2024-11-20 10:05:48.011872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.491 [2024-11-20 10:05:48.011888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.491 [2024-11-20 10:05:48.011896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.491 [2024-11-20 10:05:48.012062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.491 [2024-11-20 10:05:48.012236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.491 [2024-11-20 10:05:48.012245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.492 [2024-11-20 10:05:48.012251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.492 [2024-11-20 10:05:48.012257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.492 [2024-11-20 10:05:48.024374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.492 [2024-11-20 10:05:48.024820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.492 [2024-11-20 10:05:48.024865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.492 [2024-11-20 10:05:48.024889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.492 [2024-11-20 10:05:48.025481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.492 [2024-11-20 10:05:48.025904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.492 [2024-11-20 10:05:48.025911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.492 [2024-11-20 10:05:48.025918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.492 [2024-11-20 10:05:48.025924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.492 [2024-11-20 10:05:48.037153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.492 [2024-11-20 10:05:48.037597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.492 [2024-11-20 10:05:48.037643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.492 [2024-11-20 10:05:48.037666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.492 [2024-11-20 10:05:48.038266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.492 [2024-11-20 10:05:48.038799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.492 [2024-11-20 10:05:48.038807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.492 [2024-11-20 10:05:48.038813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.492 [2024-11-20 10:05:48.038819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.492 [2024-11-20 10:05:48.049991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.492 [2024-11-20 10:05:48.050386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.492 [2024-11-20 10:05:48.050402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.492 [2024-11-20 10:05:48.050409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.492 [2024-11-20 10:05:48.050567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.492 [2024-11-20 10:05:48.050725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.492 [2024-11-20 10:05:48.050733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.492 [2024-11-20 10:05:48.050739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.492 [2024-11-20 10:05:48.050745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.492 [2024-11-20 10:05:48.062929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.492 [2024-11-20 10:05:48.063290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.492 [2024-11-20 10:05:48.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.492 [2024-11-20 10:05:48.063315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.492 [2024-11-20 10:05:48.063486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.492 [2024-11-20 10:05:48.063657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.492 [2024-11-20 10:05:48.063665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.492 [2024-11-20 10:05:48.063671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.492 [2024-11-20 10:05:48.063677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.753 [2024-11-20 10:05:48.075933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.753 [2024-11-20 10:05:48.076379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-20 10:05:48.076424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.753 [2024-11-20 10:05:48.076447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.753 [2024-11-20 10:05:48.076903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.753 [2024-11-20 10:05:48.077062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.753 [2024-11-20 10:05:48.077072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.753 [2024-11-20 10:05:48.077078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.753 [2024-11-20 10:05:48.077084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.753 [2024-11-20 10:05:48.088657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.753 [2024-11-20 10:05:48.089027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-20 10:05:48.089043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.753 [2024-11-20 10:05:48.089050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.753 [2024-11-20 10:05:48.089213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.753 [2024-11-20 10:05:48.089396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.753 [2024-11-20 10:05:48.089404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.753 [2024-11-20 10:05:48.089410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.753 [2024-11-20 10:05:48.089416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.753 [2024-11-20 10:05:48.101416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.753 [2024-11-20 10:05:48.101848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-20 10:05:48.101892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.753 [2024-11-20 10:05:48.101916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.753 [2024-11-20 10:05:48.102506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.753 [2024-11-20 10:05:48.102953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.753 [2024-11-20 10:05:48.102960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.753 [2024-11-20 10:05:48.102967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.753 [2024-11-20 10:05:48.102973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.753 [2024-11-20 10:05:48.114218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.753 [2024-11-20 10:05:48.114635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.753 [2024-11-20 10:05:48.114651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.753 [2024-11-20 10:05:48.114658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.753 [2024-11-20 10:05:48.114815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.753 [2024-11-20 10:05:48.114973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.753 [2024-11-20 10:05:48.114980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.753 [2024-11-20 10:05:48.114986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.753 [2024-11-20 10:05:48.114997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.126960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.127355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.127371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.127378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.127536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.127694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.127701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.127707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.127713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.139732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.140165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.140223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.140248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.140764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.140931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.140938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.140944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.140950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.152480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.152870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.152886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.152893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.153050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.153213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.153221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.153243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.153250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.165242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.165600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.165643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.165666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.166198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.166600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.166617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.166631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.166644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.180136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.180603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.180648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.180670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.181235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.181489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.181501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.181510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.181519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.193022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.193454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.193470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.193478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.193644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.193810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.193818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.193824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.193830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.205909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.206321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.206337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.206345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.206516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.206674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.206681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.206687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.206693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.218848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.219266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.219282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.219289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.219456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.219623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.219630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.754 [2024-11-20 10:05:48.219636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.754 [2024-11-20 10:05:48.219643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.754 [2024-11-20 10:05:48.231682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.754 [2024-11-20 10:05:48.232103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.754 [2024-11-20 10:05:48.232119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.754 [2024-11-20 10:05:48.232126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.754 [2024-11-20 10:05:48.232304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.754 [2024-11-20 10:05:48.232475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.754 [2024-11-20 10:05:48.232483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.232489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.232496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.244530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.244872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.244889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.244896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.245064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.245255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.245268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.245275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.245281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.257618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.258003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.258019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.258027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.258211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.258385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.258393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.258400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.258406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.270479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.270889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.270906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.270913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.271080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.271255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.271265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.271271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.271277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.283267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.283653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.283669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.283676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.283834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.283991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.283999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.284005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.284014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.296073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.296487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.296504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.296511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.296676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.296843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.296851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.296857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.296863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.308864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.309214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.309231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.309238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.309404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.309571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.309578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.309584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.309591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:14.755 [2024-11-20 10:05:48.321607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:14.755 [2024-11-20 10:05:48.321993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.755 [2024-11-20 10:05:48.322009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:14.755 [2024-11-20 10:05:48.322015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:14.755 [2024-11-20 10:05:48.322172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:14.755 [2024-11-20 10:05:48.322359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:14.755 [2024-11-20 10:05:48.322367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:14.755 [2024-11-20 10:05:48.322373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:14.755 [2024-11-20 10:05:48.322379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.016 [2024-11-20 10:05:48.334485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.016 [2024-11-20 10:05:48.334887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:05:48.334940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.016 [2024-11-20 10:05:48.334964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.016 [2024-11-20 10:05:48.335529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.016 [2024-11-20 10:05:48.335701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.016 [2024-11-20 10:05:48.335709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.016 [2024-11-20 10:05:48.335715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.016 [2024-11-20 10:05:48.335722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.016 [2024-11-20 10:05:48.347266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.016 [2024-11-20 10:05:48.347662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:05:48.347678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.016 [2024-11-20 10:05:48.347684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.016 [2024-11-20 10:05:48.347841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.016 [2024-11-20 10:05:48.347999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.016 [2024-11-20 10:05:48.348006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.016 [2024-11-20 10:05:48.348012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.016 [2024-11-20 10:05:48.348018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.016 [2024-11-20 10:05:48.360013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.016 [2024-11-20 10:05:48.360440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:05:48.360457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.016 [2024-11-20 10:05:48.360464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.016 [2024-11-20 10:05:48.360631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.016 [2024-11-20 10:05:48.360797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.016 [2024-11-20 10:05:48.360805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.016 [2024-11-20 10:05:48.360811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.016 [2024-11-20 10:05:48.360818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.016 [2024-11-20 10:05:48.372728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.016 [2024-11-20 10:05:48.373143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.016 [2024-11-20 10:05:48.373159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.016 [2024-11-20 10:05:48.373166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.016 [2024-11-20 10:05:48.373342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.373510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.373518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.373525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.373531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.385568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.385908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.385925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.385932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.386098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.386271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.386280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.386286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.386292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.398317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.398753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.398799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.398822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.399345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.399735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.399751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.399766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.399779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.413085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.413590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.413612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.413623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.413874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.414127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.414142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.414151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.414161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.425990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.426343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.426360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.426367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.426534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.426701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.426708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.426715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.426721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.438791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.439188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.439245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.439269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.439730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.439897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.439905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.439911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.439917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 7180.00 IOPS, 28.05 MiB/s [2024-11-20T09:05:48.599Z] [2024-11-20 10:05:48.453919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.454416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.454439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.454449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.454701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.454955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.454966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.454975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.454988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.466840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.467234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.467252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.467259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.467425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.467591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.017 [2024-11-20 10:05:48.467599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.017 [2024-11-20 10:05:48.467605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.017 [2024-11-20 10:05:48.467611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.017 [2024-11-20 10:05:48.479665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.017 [2024-11-20 10:05:48.480083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.017 [2024-11-20 10:05:48.480099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.017 [2024-11-20 10:05:48.480106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.017 [2024-11-20 10:05:48.480280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.017 [2024-11-20 10:05:48.480447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.480455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.480461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.480467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.492458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.492871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.492887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.492894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.493061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.493234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.493242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.493249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.493255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.505292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.505694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.505714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.505722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.505889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.506055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.506063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.506069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.506076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.518427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.518837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.518855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.518863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.519034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.519214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.519224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.519230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.519237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.531332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.531746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.531792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.531817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.532411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.532967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.532975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.532981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.532987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.544236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.544656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.544673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.544680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.544851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.545017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.545025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.545031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.545037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.556943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.557372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.557388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.557395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.557553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.557710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.557717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.557723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.557729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.569725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.570115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.570131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.570138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.570323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.570489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.018 [2024-11-20 10:05:48.570497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.018 [2024-11-20 10:05:48.570503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.018 [2024-11-20 10:05:48.570510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.018 [2024-11-20 10:05:48.582506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.018 [2024-11-20 10:05:48.582899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.018 [2024-11-20 10:05:48.582915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.018 [2024-11-20 10:05:48.582922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.018 [2024-11-20 10:05:48.583079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.018 [2024-11-20 10:05:48.583260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.019 [2024-11-20 10:05:48.583272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.019 [2024-11-20 10:05:48.583278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.019 [2024-11-20 10:05:48.583284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.595592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.595991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.596007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.596014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.596180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.279 [2024-11-20 10:05:48.596353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.279 [2024-11-20 10:05:48.596362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.279 [2024-11-20 10:05:48.596368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.279 [2024-11-20 10:05:48.596374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.608428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.608846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.608862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.608870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.609036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.279 [2024-11-20 10:05:48.609209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.279 [2024-11-20 10:05:48.609217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.279 [2024-11-20 10:05:48.609224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.279 [2024-11-20 10:05:48.609230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.621269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.621657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.621673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.621680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.621837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.279 [2024-11-20 10:05:48.621995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.279 [2024-11-20 10:05:48.622003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.279 [2024-11-20 10:05:48.622009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.279 [2024-11-20 10:05:48.622014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.634072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.634489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.634506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.634513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.634678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.279 [2024-11-20 10:05:48.634845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.279 [2024-11-20 10:05:48.634853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.279 [2024-11-20 10:05:48.634859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.279 [2024-11-20 10:05:48.634865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.646820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.647209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.647225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.647231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.647389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.279 [2024-11-20 10:05:48.647546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.279 [2024-11-20 10:05:48.647554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.279 [2024-11-20 10:05:48.647559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.279 [2024-11-20 10:05:48.647565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.279 [2024-11-20 10:05:48.659625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.279 [2024-11-20 10:05:48.660046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.279 [2024-11-20 10:05:48.660063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.279 [2024-11-20 10:05:48.660070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.279 [2024-11-20 10:05:48.660244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.660411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.660419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.660425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.660431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.672373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.672801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.672854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.672878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.673384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.673552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.673561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.673567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.673573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.685271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.685658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.685675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.685682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.685849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.686015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.686023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.686030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.686036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.698047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.698431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.698479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.698503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.699081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.699554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.699563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.699568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.699575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.710910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.711335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.711354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.711362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.711536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.711708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.711717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.711724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.711730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.723764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.724200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.724230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.724396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.724562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.724571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.724578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.724585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.736864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.737269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.737286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.737293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.737471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.737644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.737652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.737659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.737665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.749759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.750226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.750271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.750295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.280 [2024-11-20 10:05:48.750728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.280 [2024-11-20 10:05:48.750895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.280 [2024-11-20 10:05:48.750903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.280 [2024-11-20 10:05:48.750914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.280 [2024-11-20 10:05:48.750921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.280 [2024-11-20 10:05:48.762598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.280 [2024-11-20 10:05:48.763042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.280 [2024-11-20 10:05:48.763060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.280 [2024-11-20 10:05:48.763067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.763241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.763409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.763418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.763425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.763431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.775616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.776026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.776043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.776051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.776234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.776409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.776417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.776423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.776430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.788544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.788927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.788944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.788952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.789118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.789310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.789319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.789326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.789332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.801440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.801814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.801837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.802003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.802169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.802177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.802183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.802190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.814255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.814553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.814570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.814578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.814744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.814910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.814917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.814924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.814930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.827213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.827584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.827601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.827608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.827774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.827940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.827948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.827954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.827961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.840095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.840472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.840488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.840499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.840665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.840832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.840840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.840846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.840853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.281 [2024-11-20 10:05:48.853183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.281 [2024-11-20 10:05:48.853528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.281 [2024-11-20 10:05:48.853545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.281 [2024-11-20 10:05:48.853553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.281 [2024-11-20 10:05:48.853723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.281 [2024-11-20 10:05:48.853895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.281 [2024-11-20 10:05:48.853903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.281 [2024-11-20 10:05:48.853909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.281 [2024-11-20 10:05:48.853916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.866120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.866512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.866528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.866536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.866701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.866868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.866875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.866882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.866888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.879018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.879397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.879413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.879421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.879587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.879756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.879764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.879770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.879776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.891927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.892295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.892312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.892320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.892486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.892653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.892661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.892667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.892673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.904788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.905227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.905245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.905252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.905418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.905585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.905593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.905599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.905605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.917613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.918064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.918084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.918091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.918265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.918433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.918441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.918451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.918457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.930560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.930857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.930873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.543 [2024-11-20 10:05:48.930881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.543 [2024-11-20 10:05:48.931046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.543 [2024-11-20 10:05:48.931219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.543 [2024-11-20 10:05:48.931228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.543 [2024-11-20 10:05:48.931234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.543 [2024-11-20 10:05:48.931240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.543 [2024-11-20 10:05:48.943374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.543 [2024-11-20 10:05:48.943716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.543 [2024-11-20 10:05:48.943733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:48.943740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:48.943906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:48.944073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:48.944081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:48.944087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:48.944093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:48.956212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:48.956598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:48.956615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:48.956622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:48.956789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:48.956956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:48.956964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:48.956970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:48.956976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:48.969032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:48.969385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:48.969402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:48.969409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:48.969574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:48.969741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:48.969749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:48.969755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:48.969761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:48.981826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:48.982241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:48.982257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:48.982264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:48.982430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:48.982597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:48.982605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:48.982612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:48.982618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:48.994724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:48.995176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:48.995234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:48.995259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:48.995836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:48.996004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:48.996011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:48.996017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:48.996024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:49.009698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:49.010243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:49.010267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:49.010282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:49.010534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:49.010789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:49.010800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:49.010809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:49.010818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:49.022706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:49.023133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:49.023151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:49.023159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:49.023334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:49.023502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:49.023511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:49.023518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:49.023524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:49.035709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:49.036109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:49.036127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:49.036135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:49.036315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:49.036487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:49.036496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:49.036503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:49.036509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:49.048584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:49.048963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:49.048979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:49.048986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:49.049152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:49.049329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:49.049338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:49.049344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.544 [2024-11-20 10:05:49.049350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.544 [2024-11-20 10:05:49.061573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.544 [2024-11-20 10:05:49.062050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.544 [2024-11-20 10:05:49.062068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.544 [2024-11-20 10:05:49.062075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.544 [2024-11-20 10:05:49.062253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.544 [2024-11-20 10:05:49.062426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.544 [2024-11-20 10:05:49.062434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.544 [2024-11-20 10:05:49.062440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.545 [2024-11-20 10:05:49.062447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.545 [2024-11-20 10:05:49.074415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.545 [2024-11-20 10:05:49.074799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.545 [2024-11-20 10:05:49.074817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.545 [2024-11-20 10:05:49.074825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.545 [2024-11-20 10:05:49.074991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.545 [2024-11-20 10:05:49.075157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.545 [2024-11-20 10:05:49.075165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.545 [2024-11-20 10:05:49.075172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.545 [2024-11-20 10:05:49.075178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.545 [2024-11-20 10:05:49.087268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.545 [2024-11-20 10:05:49.087593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.545 [2024-11-20 10:05:49.087609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.545 [2024-11-20 10:05:49.087616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.545 [2024-11-20 10:05:49.087783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.545 [2024-11-20 10:05:49.087950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.545 [2024-11-20 10:05:49.087957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.545 [2024-11-20 10:05:49.087967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.545 [2024-11-20 10:05:49.087974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.545 [2024-11-20 10:05:49.100048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.545 [2024-11-20 10:05:49.100356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.545 [2024-11-20 10:05:49.100400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.545 [2024-11-20 10:05:49.100424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.545 [2024-11-20 10:05:49.101000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.545 [2024-11-20 10:05:49.101491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.545 [2024-11-20 10:05:49.101501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.545 [2024-11-20 10:05:49.101507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.545 [2024-11-20 10:05:49.101513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.545 [2024-11-20 10:05:49.112828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.545 [2024-11-20 10:05:49.113252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.545 [2024-11-20 10:05:49.113270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.545 [2024-11-20 10:05:49.113277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.545 [2024-11-20 10:05:49.113442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.545 [2024-11-20 10:05:49.113612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.545 [2024-11-20 10:05:49.113619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.545 [2024-11-20 10:05:49.113627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.545 [2024-11-20 10:05:49.113633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.806 [2024-11-20 10:05:49.125793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.806 [2024-11-20 10:05:49.126200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.806 [2024-11-20 10:05:49.126222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.806 [2024-11-20 10:05:49.126230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.806 [2024-11-20 10:05:49.126401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.806 [2024-11-20 10:05:49.126573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.806 [2024-11-20 10:05:49.126580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.806 [2024-11-20 10:05:49.126586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.806 [2024-11-20 10:05:49.126592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.806 [2024-11-20 10:05:49.138665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.806 [2024-11-20 10:05:49.139085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.806 [2024-11-20 10:05:49.139101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.806 [2024-11-20 10:05:49.139108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.806 [2024-11-20 10:05:49.139279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.139446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.139454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.139460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.139466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.151435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.151843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.151860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.151867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.152032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.152199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.152213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.152219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.152225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.164207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.164598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.164613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.164620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.164778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.164935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.164942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.164948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.164954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.177045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.177459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.177476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.177486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.177653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.177820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.177827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.177833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.177839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.190011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.190408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.190425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.190432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.190597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.190764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.190772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.190778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.190784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.202852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.203246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.203262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.203269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.203427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.203584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.203592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.203598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.203603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.215713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.216128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.216144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.216151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.216325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.216495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.216503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.216510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.216516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.228566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.228985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.229001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.229008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.229174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.229346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.229355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.229361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.229367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.241406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.241750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.241795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.241819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.242420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.242589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.242597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.242603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.242610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.254174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.254623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.254639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.254646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.254813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.254979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.254987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.254996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.255002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.267076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.267429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.267475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.267498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.268076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.268670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.268697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.268718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.807 [2024-11-20 10:05:49.268737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.807 [2024-11-20 10:05:49.279855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.807 [2024-11-20 10:05:49.280293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.807 [2024-11-20 10:05:49.280311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.807 [2024-11-20 10:05:49.280319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.807 [2024-11-20 10:05:49.280490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.807 [2024-11-20 10:05:49.280662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.807 [2024-11-20 10:05:49.280671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.807 [2024-11-20 10:05:49.280678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.280684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.292857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.293289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.293307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.293314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.293488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.293661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.293669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.293676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.293682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.305689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.306126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.306142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.306149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.306322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.306490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.306498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.306504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.306510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.318418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.318836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.318882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.318905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.319496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.320026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.320043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.320056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.320070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.333295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.333796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.333818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.333828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.334080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.334341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.334353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.334363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.334372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.346325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.346768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.346812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.346843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.347436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.347958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.347966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.347973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.347978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.359080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.359514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.359530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.359537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.359703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.359871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.359879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.359885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.359891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:15.808 [2024-11-20 10:05:49.371920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:15.808 [2024-11-20 10:05:49.372342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.808 [2024-11-20 10:05:49.372359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:15.808 [2024-11-20 10:05:49.372366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:15.808 [2024-11-20 10:05:49.372537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:15.808 [2024-11-20 10:05:49.372695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:15.808 [2024-11-20 10:05:49.372703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:15.808 [2024-11-20 10:05:49.372708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:15.808 [2024-11-20 10:05:49.372714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.068 [2024-11-20 10:05:49.384911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.068 [2024-11-20 10:05:49.385342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.068 [2024-11-20 10:05:49.385359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.068 [2024-11-20 10:05:49.385366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.068 [2024-11-20 10:05:49.385538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.068 [2024-11-20 10:05:49.385709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.068 [2024-11-20 10:05:49.385723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.068 [2024-11-20 10:05:49.385730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.068 [2024-11-20 10:05:49.385736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.068 [2024-11-20 10:05:49.397627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.068 [2024-11-20 10:05:49.398044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.068 [2024-11-20 10:05:49.398089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.068 [2024-11-20 10:05:49.398112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.068 [2024-11-20 10:05:49.398702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.068 [2024-11-20 10:05:49.399209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.068 [2024-11-20 10:05:49.399218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.068 [2024-11-20 10:05:49.399224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.068 [2024-11-20 10:05:49.399230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.068 [2024-11-20 10:05:49.410393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.068 [2024-11-20 10:05:49.410815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.068 [2024-11-20 10:05:49.410831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.068 [2024-11-20 10:05:49.410837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.068 [2024-11-20 10:05:49.410995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.068 [2024-11-20 10:05:49.411153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.068 [2024-11-20 10:05:49.411160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.411166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.411172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.423234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.423615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.423660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.423684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.424163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.424348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.424357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.424363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.424372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.436063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.436431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.436448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.436455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.436621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.436788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.436796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.436802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.436808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 5744.00 IOPS, 22.44 MiB/s [2024-11-20T09:05:49.651Z] [2024-11-20 10:05:49.448844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.449272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.449288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.449295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.449454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.449612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.449619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.449625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.449631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.461684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.462102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.462118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.462125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.462307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.462474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.462482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.462488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.462494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.474503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.474829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.474844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.474851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.475009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.475166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.475173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.475179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.475185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.487431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.487796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.487840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.487863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.488376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.488549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.488558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.488564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.488570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.500194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.500596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.500611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.500618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.500775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.500933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.069 [2024-11-20 10:05:49.500941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.069 [2024-11-20 10:05:49.500946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.069 [2024-11-20 10:05:49.500952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.069 [2024-11-20 10:05:49.512912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.069 [2024-11-20 10:05:49.513320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.069 [2024-11-20 10:05:49.513336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.069 [2024-11-20 10:05:49.513343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.069 [2024-11-20 10:05:49.513513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.069 [2024-11-20 10:05:49.513679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.513687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.513693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.513699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.525647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.526067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.526082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.526089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.526269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.526436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.526444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.526451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.526457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.538419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.538841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.538859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.538866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.539033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.539200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.539217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.539223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.539230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.551576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.552005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.552022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.552030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.552208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.552382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.552394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.552401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.552407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.564368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.564817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.564857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.564883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.565435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.565603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.565611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.565617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.565623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.577132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.577561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.577607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.577631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.578221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.578658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.578666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.578673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.578679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.590060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.590500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.590516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.590522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.590689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.590855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.590863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.590869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.590878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.602890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.603273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.603323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.603347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.603924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.604522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.604549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.070 [2024-11-20 10:05:49.604570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.070 [2024-11-20 10:05:49.604589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.070 [2024-11-20 10:05:49.615778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.070 [2024-11-20 10:05:49.616179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.070 [2024-11-20 10:05:49.616237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.070 [2024-11-20 10:05:49.616261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.070 [2024-11-20 10:05:49.616767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.070 [2024-11-20 10:05:49.616933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.070 [2024-11-20 10:05:49.616941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.071 [2024-11-20 10:05:49.616948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.071 [2024-11-20 10:05:49.616954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.071 [2024-11-20 10:05:49.628558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.071 [2024-11-20 10:05:49.628969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 10:05:49.628985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.071 [2024-11-20 10:05:49.628992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.071 [2024-11-20 10:05:49.629149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.071 [2024-11-20 10:05:49.629333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.071 [2024-11-20 10:05:49.629342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.071 [2024-11-20 10:05:49.629348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.071 [2024-11-20 10:05:49.629354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.071 [2024-11-20 10:05:49.641404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.071 [2024-11-20 10:05:49.641864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 10:05:49.641908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.071 [2024-11-20 10:05:49.641931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.071 [2024-11-20 10:05:49.642422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.071 [2024-11-20 10:05:49.642594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.071 [2024-11-20 10:05:49.642603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.071 [2024-11-20 10:05:49.642608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.071 [2024-11-20 10:05:49.642615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.332 [2024-11-20 10:05:49.654530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.332 [2024-11-20 10:05:49.654956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.332 [2024-11-20 10:05:49.654973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.332 [2024-11-20 10:05:49.654980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.332 [2024-11-20 10:05:49.655138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.332 [2024-11-20 10:05:49.655321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.332 [2024-11-20 10:05:49.655330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.332 [2024-11-20 10:05:49.655337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.655343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.667249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.667664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.667680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.667687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.667845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.668002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.668010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.668015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.668021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.680019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.680458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.680502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.680526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.681111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.681695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.681704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.681711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.681717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.692723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.693140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.693156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.693163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.693347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.693514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.693523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.693529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.693535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.705574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.705902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.705918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.705925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.706082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.706263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.706271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.706278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.706284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.718370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.718699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.718714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.718721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.718878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.719036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.719047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.719053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.719059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.333 [2024-11-20 10:05:49.731116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.333 [2024-11-20 10:05:49.731480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.333 [2024-11-20 10:05:49.731496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.333 [2024-11-20 10:05:49.731504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.333 [2024-11-20 10:05:49.731670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.333 [2024-11-20 10:05:49.731837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.333 [2024-11-20 10:05:49.731845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.333 [2024-11-20 10:05:49.731851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.333 [2024-11-20 10:05:49.731857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2808689 Killed "${NVMF_APP[@]}" "$@" 00:27:16.595 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.596 [2024-11-20 10:05:49.962356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.596 [2024-11-20 10:05:49.962721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.596 [2024-11-20 10:05:49.962737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.596 [2024-11-20 10:05:49.962745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.596 [2024-11-20 10:05:49.962916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.596 [2024-11-20 10:05:49.963089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.596 [2024-11-20 10:05:49.963097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.596 [2024-11-20 10:05:49.963108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:16.596 [2024-11-20 10:05:49.963115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2809878 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2809878 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2809878 ']' 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.596 10:05:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.596 [2024-11-20 10:05:49.975400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.596 [2024-11-20 10:05:49.975751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.596 [2024-11-20 10:05:49.975768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.596 [2024-11-20 10:05:49.975776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.596 [2024-11-20 10:05:49.975947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.596 [2024-11-20 10:05:49.976118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.596 [2024-11-20 10:05:49.976126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.596 [2024-11-20 10:05:49.976132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.596 [2024-11-20 10:05:49.976139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.596 [2024-11-20 10:05:50.013408] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:27:16.596 [2024-11-20 10:05:50.013448] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:27:16.596 [2024-11-20 10:05:50.067248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.596 [2024-11-20 10:05:50.067665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.596 [2024-11-20 10:05:50.067683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.067691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.067868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.068041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.068050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.068057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.068064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.080316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.080749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.080766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.080778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.080949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.081120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.081129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.081135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.081141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.093275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.093703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.093720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.093728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.093899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.094071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.094079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.094086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.094093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.094583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:16.597 [2024-11-20 10:05:50.106233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.106522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.106543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.106552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.106725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.106899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.106907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.106914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.106921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.119212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.119569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.119588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.119596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.119770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.119946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.119955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.119961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.119968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.132262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.132582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.132599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.132606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.132793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.132967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.132975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.132982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.132988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:16.597 [2024-11-20 10:05:50.137867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.597 [2024-11-20 10:05:50.137892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.597 [2024-11-20 10:05:50.137899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.597 [2024-11-20 10:05:50.137905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:16.597 [2024-11-20 10:05:50.137910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.597 [2024-11-20 10:05:50.139233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.597 [2024-11-20 10:05:50.139291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.597 [2024-11-20 10:05:50.139292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.597 [2024-11-20 10:05:50.145308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.145688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.145708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.145717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.145890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.146062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.146071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.146078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.146085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.158376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.158674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.158694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.158702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.158875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.597 [2024-11-20 10:05:50.159048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.597 [2024-11-20 10:05:50.159056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.597 [2024-11-20 10:05:50.159063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.597 [2024-11-20 10:05:50.159070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.597 [2024-11-20 10:05:50.171688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.597 [2024-11-20 10:05:50.172072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.597 [2024-11-20 10:05:50.172092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.597 [2024-11-20 10:05:50.172102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.597 [2024-11-20 10:05:50.172290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.858 [2024-11-20 10:05:50.172474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.858 [2024-11-20 10:05:50.172482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.858 [2024-11-20 10:05:50.172490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.858 [2024-11-20 10:05:50.172498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.858 [2024-11-20 10:05:50.184719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.858 [2024-11-20 10:05:50.185071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.858 [2024-11-20 10:05:50.185090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.858 [2024-11-20 10:05:50.185098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.858 [2024-11-20 10:05:50.185277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.858 [2024-11-20 10:05:50.185451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.858 [2024-11-20 10:05:50.185459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.858 [2024-11-20 10:05:50.185466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.858 [2024-11-20 10:05:50.185473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.858 [2024-11-20 10:05:50.197758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.858 [2024-11-20 10:05:50.198061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.858 [2024-11-20 10:05:50.198079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.858 [2024-11-20 10:05:50.198092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.858 [2024-11-20 10:05:50.198270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.858 [2024-11-20 10:05:50.198443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.858 [2024-11-20 10:05:50.198452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.858 [2024-11-20 10:05:50.198459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.858 [2024-11-20 10:05:50.198466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.858 [2024-11-20 10:05:50.210764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.858 [2024-11-20 10:05:50.211106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.858 [2024-11-20 10:05:50.211124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.858 [2024-11-20 10:05:50.211132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.858 [2024-11-20 10:05:50.211311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.858 [2024-11-20 10:05:50.211484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.858 [2024-11-20 10:05:50.211492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.858 [2024-11-20 10:05:50.211498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.858 [2024-11-20 10:05:50.211504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.858 [2024-11-20 10:05:50.223791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.858 [2024-11-20 10:05:50.224197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.858 [2024-11-20 10:05:50.224219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.858 [2024-11-20 10:05:50.224226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.858 [2024-11-20 10:05:50.224397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.858 [2024-11-20 10:05:50.224568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.858 [2024-11-20 10:05:50.224577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.858 [2024-11-20 10:05:50.224583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.858 [2024-11-20 10:05:50.224590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.858 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.858 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:16.858 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.858 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.858 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.859 [2024-11-20 10:05:50.236877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.237219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.237242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.237251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.237421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.237592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.237600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.237606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.237612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 [2024-11-20 10:05:50.249897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.250236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.250254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.250262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.250434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.250605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.250613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.250620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.250626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 [2024-11-20 10:05:50.262915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.263199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.263222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.263229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.263400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.263573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.263582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.263588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.263594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.859 [2024-11-20 10:05:50.275058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.859 [2024-11-20 10:05:50.275872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.276213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.276230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.276237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.276409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.276581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.276589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.276596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.276602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.859 [2024-11-20 10:05:50.288895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.289246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.289264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.289271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.289444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.289615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.289623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.289629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.289635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 [2024-11-20 10:05:50.301911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.302360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.302377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.302385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.302556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.302728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.302736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.302743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.302749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 Malloc0 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.859 [2024-11-20 10:05:50.314877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.315319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.315338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.315346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.315521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.315699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.315709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.315716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.315722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.859 [2024-11-20 10:05:50.327830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.859 [2024-11-20 10:05:50.328263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.859 [2024-11-20 10:05:50.328281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12500 with addr=10.0.0.2, port=4420 00:27:16.859 [2024-11-20 10:05:50.328289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12500 is same with the state(6) to be set 00:27:16.859 [2024-11-20 10:05:50.328461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12500 (9): Bad file descriptor 00:27:16.859 [2024-11-20 10:05:50.328632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:16.859 [2024-11-20 10:05:50.328640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:16.859 [2024-11-20 10:05:50.328647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:16.859 [2024-11-20 10:05:50.328653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:16.859 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.860 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.860 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.860 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.860 [2024-11-20 10:05:50.334438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.860 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.860 10:05:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2808953 00:27:16.860 [2024-11-20 10:05:50.340786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:16.860 [2024-11-20 10:05:50.371311] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:18.055 4910.33 IOPS, 19.18 MiB/s [2024-11-20T09:05:52.573Z] 5835.71 IOPS, 22.80 MiB/s [2024-11-20T09:05:53.510Z] 6543.00 IOPS, 25.56 MiB/s [2024-11-20T09:05:54.887Z] 7070.33 IOPS, 27.62 MiB/s [2024-11-20T09:05:55.824Z] 7511.30 IOPS, 29.34 MiB/s [2024-11-20T09:05:56.762Z] 7850.45 IOPS, 30.67 MiB/s [2024-11-20T09:05:57.699Z] 8147.25 IOPS, 31.83 MiB/s [2024-11-20T09:05:58.636Z] 8396.77 IOPS, 32.80 MiB/s [2024-11-20T09:05:59.573Z] 8608.21 IOPS, 33.63 MiB/s [2024-11-20T09:05:59.573Z] 8796.00 IOPS, 34.36 MiB/s 00:27:25.991 Latency(us) 00:27:25.991 [2024-11-20T09:05:59.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.991 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:25.991 Verification LBA range: start 0x0 length 0x4000 00:27:25.991 Nvme1n1 : 15.01 8797.17 34.36 11125.11 0.00 6405.57 427.15 18724.57 00:27:25.991 [2024-11-20T09:05:59.573Z] =================================================================================================================== 00:27:25.991 [2024-11-20T09:05:59.573Z] Total : 8797.17 34.36 11125.11 0.00 6405.57 427.15 18724.57 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.251 10:05:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.251 rmmod nvme_tcp 00:27:26.251 rmmod nvme_fabrics 00:27:26.251 rmmod nvme_keyring 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2809878 ']' 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2809878 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2809878 ']' 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2809878 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809878 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809878' 00:27:26.251 killing 
process with pid 2809878 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2809878 00:27:26.251 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2809878 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.510 10:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.047 00:27:29.047 real 0m26.264s 00:27:29.047 user 1m1.436s 00:27:29.047 sys 0m6.855s 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:29.047 ************************************ 00:27:29.047 END TEST 
nvmf_bdevperf 00:27:29.047 ************************************ 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.047 ************************************ 00:27:29.047 START TEST nvmf_target_disconnect 00:27:29.047 ************************************ 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:29.047 * Looking for test storage... 00:27:29.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.047 10:06:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.047 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.048 --rc genhtml_branch_coverage=1 00:27:29.048 --rc genhtml_function_coverage=1 00:27:29.048 --rc genhtml_legend=1 00:27:29.048 --rc geninfo_all_blocks=1 00:27:29.048 --rc geninfo_unexecuted_blocks=1 
00:27:29.048 00:27:29.048 ' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.048 --rc genhtml_branch_coverage=1 00:27:29.048 --rc genhtml_function_coverage=1 00:27:29.048 --rc genhtml_legend=1 00:27:29.048 --rc geninfo_all_blocks=1 00:27:29.048 --rc geninfo_unexecuted_blocks=1 00:27:29.048 00:27:29.048 ' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.048 --rc genhtml_branch_coverage=1 00:27:29.048 --rc genhtml_function_coverage=1 00:27:29.048 --rc genhtml_legend=1 00:27:29.048 --rc geninfo_all_blocks=1 00:27:29.048 --rc geninfo_unexecuted_blocks=1 00:27:29.048 00:27:29.048 ' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.048 --rc genhtml_branch_coverage=1 00:27:29.048 --rc genhtml_function_coverage=1 00:27:29.048 --rc genhtml_legend=1 00:27:29.048 --rc geninfo_all_blocks=1 00:27:29.048 --rc geninfo_unexecuted_blocks=1 00:27:29.048 00:27:29.048 ' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.048 10:06:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.048 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:29.049 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.049 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:29.049 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:29.049 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.049 10:06:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:35.621 
10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:35.621 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:35.621 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:35.621 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:35.622 Found net devices under 0000:86:00.0: cvl_0_0 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:35.622 Found net devices under 0000:86:00.1: cvl_0_1 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.622 10:06:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.622 10:06:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:35.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:27:35.622 00:27:35.622 --- 10.0.0.2 ping statistics --- 00:27:35.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.622 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:35.622 00:27:35.622 --- 10.0.0.1 ping statistics --- 00:27:35.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.622 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:35.622 10:06:08 
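The `nvmf_tcp_init` trace above isolates the target NIC in its own network namespace so that initiator-to-target traffic actually crosses the physical link, then opens the NVMe/TCP port and verifies connectivity. The sequence can be sketched as one function (commands mirror the `ip`/`iptables`/`ping` calls in the log; requires root and the real `cvl_0_0`/`cvl_0_1` interfaces, so run only in a disposable test environment):

```shell
# Sketch of the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291).
# Assumes root privileges and the cvl_0_0/cvl_0_1 interfaces from the log.
nvmf_tcp_init_sketch() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"       # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"   # initiator side stays in the host netns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions, as the log does.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

After this, the test scripts prefix the target application with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the trace) so `nvmf_tgt` binds inside the namespace.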
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:35.622 ************************************ 00:27:35.622 START TEST nvmf_target_disconnect_tc1 00:27:35.622 ************************************ 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:35.622 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.622 [2024-11-20 10:06:08.380131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-11-20 10:06:08.380179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabcab0 with 
addr=10.0.0.2, port=4420 00:27:35.622 [2024-11-20 10:06:08.380196] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:35.623 [2024-11-20 10:06:08.380214] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:35.623 [2024-11-20 10:06:08.380221] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:35.623 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:35.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:35.623 Initializing NVMe Controllers 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:35.623 00:27:35.623 real 0m0.122s 00:27:35.623 user 0m0.049s 00:27:35.623 sys 0m0.072s 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 ************************************ 00:27:35.623 END TEST nvmf_target_disconnect_tc1 00:27:35.623 ************************************ 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:35.623 10:06:08 
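tc1 above runs the `reconnect` example before any target is listening, so `spdk_nvme_probe()` must fail with `connect() ... errno = 111` (ECONNREFUSED); the `NOT` wrapper then inverts the exit status, which is why `es=1` makes the test PASS. The expected-failure pattern reduces to a sketch like this (the function name is illustrative; the `reconnect` path and arguments are taken from the log):

```shell
# tc1's expected-failure check: connecting must FAIL because nothing is
# listening yet on 10.0.0.2:4420. Adjust the reconnect path for your tree.
expect_connect_refused() {
    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected success: something is already listening" >&2
        return 1    # success here means the test premise is broken
    fi
    return 0        # connection refused is the expected outcome
}
```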
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 ************************************ 00:27:35.623 START TEST nvmf_target_disconnect_tc2 00:27:35.623 ************************************ 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2815553 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2815553 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2815553 ']' 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 [2024-11-20 10:06:08.521751] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:27:35.623 [2024-11-20 10:06:08.521797] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.623 [2024-11-20 10:06:08.602186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:35.623 [2024-11-20 10:06:08.644244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.623 [2024-11-20 10:06:08.644281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.623 [2024-11-20 10:06:08.644288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.623 [2024-11-20 10:06:08.644294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.623 [2024-11-20 10:06:08.644299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:35.623 [2024-11-20 10:06:08.646009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:35.623 [2024-11-20 10:06:08.646143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:35.623 [2024-11-20 10:06:08.646252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:35.623 [2024-11-20 10:06:08.646252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 Malloc0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 [2024-11-20 10:06:08.804188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 [2024-11-20 10:06:08.829150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2815581 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:35.623 10:06:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:37.539 10:06:10 
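tc2 starts `nvmf_tgt` inside the namespace and configures it over RPC; the `rpc_cmd` calls traced above correspond to the following sequence, shown here via `scripts/rpc.py` (the client that `rpc_cmd` wraps in the SPDK test scripts; a running `nvmf_tgt` listening on the default RPC socket is assumed):

```shell
# tc2's target configuration, matching the rpc_cmd calls in the log:
# a 64 MiB malloc bdev exported as namespace 1 of cnode1 on 10.0.0.2:4420.
configure_tc2_target() {
    local rpc="scripts/rpc.py"
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```

With the listener up, the test launches `reconnect` against 10.0.0.2:4420 and then kills the target with `kill -9` (sh@45 below), which produces the storm of `Read/Write completed with error` and `CQ transport error -6` lines that follow: the in-flight I/O fails while the example repeatedly tries to reconnect.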
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2815553 00:27:37.539 10:06:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 
Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 [2024-11-20 10:06:10.855926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O 
failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 
00:27:37.539 [2024-11-20 10:06:10.856130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 
starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 [2024-11-20 10:06:10.856322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Read completed with error (sct=0, sc=8) 00:27:37.539 starting I/O failed 00:27:37.539 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, 
sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Write completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 Read completed with error (sct=0, sc=8) 00:27:37.540 starting I/O failed 00:27:37.540 [2024-11-20 10:06:10.856517] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.540 [2024-11-20 10:06:10.856726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.540 [2024-11-20 10:06:10.856748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.540 qpair failed and we were unable to recover it. 00:27:37.540 [2024-11-20 10:06:10.856824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.540 [2024-11-20 10:06:10.856835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.540 qpair failed and we were unable to recover it. 00:27:37.540 [2024-11-20 10:06:10.857043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.540 [2024-11-20 10:06:10.857055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.540 qpair failed and we were unable to recover it. 00:27:37.540 [2024-11-20 10:06:10.857149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.540 [2024-11-20 10:06:10.857159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.540 qpair failed and we were unable to recover it. 00:27:37.540 [2024-11-20 10:06:10.861213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.540 [2024-11-20 10:06:10.861238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.540 qpair failed and we were unable to recover it. 
[log condensed: the same two-line error pair — "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" followed by "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420", each attempt ending with "qpair failed and we were unable to recover it." — repeats with only the timestamps changing, from 00:27:37.540 [2024-11-20 10:06:10.861484] through 00:27:37.543 [2024-11-20 10:06:10.883388].]
00:27:37.543 [2024-11-20 10:06:10.883553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.883574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.883678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.883700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.883880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.883902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.884011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.884032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.884211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.884234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 
00:27:37.543 [2024-11-20 10:06:10.884359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.884381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.884553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.884574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.884794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.884816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.884980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.885166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 
00:27:37.543 [2024-11-20 10:06:10.885375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.885547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.885684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.885962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.885983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.886102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.886123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 
00:27:37.543 [2024-11-20 10:06:10.886325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.886347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.886525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.886547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.886644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.543 [2024-11-20 10:06:10.886666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.543 qpair failed and we were unable to recover it. 00:27:37.543 [2024-11-20 10:06:10.886790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.886811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.886978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.887004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.887185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.887213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.887411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.887432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.887602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.887623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.887823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.887844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.888043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.888065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.888166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.888188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.888406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.888427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.888586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.888607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.888842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.888873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.889168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.889200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.889467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.889498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.889714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.889745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.890028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.890058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.890299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.890333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.890520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.890552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.890755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.890776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.890886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.890908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.891148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.891169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.891420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.891459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.891726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.891758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.892046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.892077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.892194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.892239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.892510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.892541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.892646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.892677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.892848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.892870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.893022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.893043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.893153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.893178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.893413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.893436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.893630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.893650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.893935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.893957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 
00:27:37.544 [2024-11-20 10:06:10.894079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.894100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.894320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.894343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.544 [2024-11-20 10:06:10.894508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.544 [2024-11-20 10:06:10.894529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.544 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.894689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.894711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.894973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.894994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.545 [2024-11-20 10:06:10.895188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.895232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.895364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.895395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.895593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.895625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.895838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.895870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.896055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.896087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.545 [2024-11-20 10:06:10.896334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.896367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.896544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.896576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.896771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.896792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.896972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.896994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.897226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.897259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.545 [2024-11-20 10:06:10.897454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.897486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.897755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.897786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.898080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.898111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.898326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.898349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.898520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.898541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.545 [2024-11-20 10:06:10.898716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.898748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.898885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.898917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.899174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.899215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.899401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.899423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.899601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.899633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.545 [2024-11-20 10:06:10.899932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.899963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.900096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.900128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.900311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.900334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.900557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.900589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 00:27:37.545 [2024-11-20 10:06:10.900856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.545 [2024-11-20 10:06:10.900889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.545 qpair failed and we were unable to recover it. 
00:27:37.548 [2024-11-20 10:06:10.921750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.921782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.548 qpair failed and we were unable to recover it. 00:27:37.548 [2024-11-20 10:06:10.921977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.922008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.548 qpair failed and we were unable to recover it. 00:27:37.548 [2024-11-20 10:06:10.922135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.922167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.548 qpair failed and we were unable to recover it. 00:27:37.548 [2024-11-20 10:06:10.922305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.922338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.548 qpair failed and we were unable to recover it. 00:27:37.548 [2024-11-20 10:06:10.922522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.922554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.548 qpair failed and we were unable to recover it. 
00:27:37.548 [2024-11-20 10:06:10.922731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.548 [2024-11-20 10:06:10.922762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.922881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.922913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.923048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.923079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.923278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.923300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.923411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.923432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.923598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.923619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.923866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.923887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.923990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.924011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.924163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.924184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.924378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.924400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.924579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.924610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.924854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.924892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.925089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.925121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.925365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.925399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.925515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.925546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.925734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.925754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.925863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.925885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.926129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.926150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.926321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.926344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.926534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.926564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.926681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.926713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.926896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.926928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.927148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.927179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.927363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.927394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.927578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.927610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.927789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.927820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.928072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.928104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.928241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.928274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.928403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.928433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.928697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.928728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.928922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.928953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.929060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.929092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.929302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.929336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.929446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.929478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.929672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 
00:27:37.549 [2024-11-20 10:06:10.929974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.930004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.930174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.930228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.549 qpair failed and we were unable to recover it. 00:27:37.549 [2024-11-20 10:06:10.930382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.549 [2024-11-20 10:06:10.930404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.930634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.930666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.930797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.930829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.931003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.931221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.931435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.931554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.931657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.931845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.931866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.932027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.932198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.932398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.932514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.932697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.932875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.932901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.933142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.933173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.933358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.933391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.933581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.933613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.933737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.933758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.933923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.933944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.934111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.934132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.934290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.934312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.934524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.934554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.934738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.934769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.935009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.935040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.935159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.935191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.935386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.935419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.935660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.935691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.935873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.935905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.936093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.936125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.936238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.936271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.936537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.936568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.936688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.936710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.550 [2024-11-20 10:06:10.936871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.936892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.937111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.937133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.937298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.937320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.937508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.937530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 00:27:37.550 [2024-11-20 10:06:10.937688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.550 [2024-11-20 10:06:10.937710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.550 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.937806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.937828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.937939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.937961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.938199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.938229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.938384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.938406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.938600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.938631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.938809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.938840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.938957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.938988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.939122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.939153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.939433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.939466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.939630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.939795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.939816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.939964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.939986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.940225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.940248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.940368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.940389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.940611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.940654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.940840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.940871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.940986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.941248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.941475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.941671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.941798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.941969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.941990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.942152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.942172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.942367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.942400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.942541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.942572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.942753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.942784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.942905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.942936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.943119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.943151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.943343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.943365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.943465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.943486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.943658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.943680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.943848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.943870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.944091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.944111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.944259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.944283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 00:27:37.551 [2024-11-20 10:06:10.944546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.551 [2024-11-20 10:06:10.944567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.551 qpair failed and we were unable to recover it. 
00:27:37.551 [2024-11-20 10:06:10.944728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.944750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.944938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.944970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.945239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.945272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.945478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.945523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.945747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.945768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.946010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.946134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.946341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.946479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.946668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.946941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.946972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.947210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.947243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.947462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.947483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.947670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.947692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.947910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.947941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.948062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.948092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.948335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.948369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.948551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.948572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.948689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.948711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.948889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.948910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.949154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.949186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.949333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.949371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.949543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.949575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.949758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.949789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.949962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.949994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.950256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.950289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.950395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.950426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.950637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.950669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.950793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.950815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.950927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.950948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.951098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.951119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.951336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.951358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.951453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.951472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.951638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.951659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.955419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.955455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 
00:27:37.552 [2024-11-20 10:06:10.955742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.955773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.552 [2024-11-20 10:06:10.955982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.552 [2024-11-20 10:06:10.956014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.552 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.956256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.956290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.956467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.956497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.956684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.956716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 
00:27:37.553 [2024-11-20 10:06:10.956901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.956933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.957057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.957089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.957227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.957502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.957533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.957659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.957690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 
00:27:37.553 [2024-11-20 10:06:10.957931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.957962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.958169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.958210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.958341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.958372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.958582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.958655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.958806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.958842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 
00:27:37.553 [2024-11-20 10:06:10.958979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.959218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.959377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.959551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 00:27:37.553 [2024-11-20 10:06:10.959797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 
00:27:37.553 [2024-11-20 10:06:10.959961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.553 [2024-11-20 10:06:10.959993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.553 qpair failed and we were unable to recover it. 
00:27:37.553 [... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it." sequence repeated 114 more times for tqpair=0xf06ba0 (addr=10.0.0.2, port=4420) between 10:06:10.960256 and 10:06:10.984783; repeats truncated ...] 
00:27:37.556 [2024-11-20 10:06:10.984905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.984936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.985233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.985265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.985453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.985485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.985602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.985634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.985900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.985932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 
00:27:37.556 [2024-11-20 10:06:10.986121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.986153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.986344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.986377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.986613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.986645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.986749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.986781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.987029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.987060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 
00:27:37.556 [2024-11-20 10:06:10.987298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.556 [2024-11-20 10:06:10.987332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.556 qpair failed and we were unable to recover it. 00:27:37.556 [2024-11-20 10:06:10.987585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.987615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.987808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.987840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.988086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.988118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.988252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.988285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.988541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.988572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.988845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.988877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.989135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.989166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.989366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.989399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.989595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.989627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.989762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.989793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.990035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.990067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.990327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.990360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.990472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.990503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.990674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.990706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.990992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.991024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.991275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.991308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.991486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.991517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.991760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.991793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.991908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.991939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.992078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.992110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.992291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.992324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.992436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.992467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.992666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.992698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.992882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.992912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.993014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.993046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.993255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.993288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.993496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.993528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.993644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.993676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.993875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.993906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.994089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.994121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.994312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.994350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.994475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.994507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.994767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.994800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.557 [2024-11-20 10:06:10.995008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.995038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 
00:27:37.557 [2024-11-20 10:06:10.995240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.557 [2024-11-20 10:06:10.995274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.557 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.995486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.995517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.995650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.995682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.995924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.995955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.996249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.996282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:10.996530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.996562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.996683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.996714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.996895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.996926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.997102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.997133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.997326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.997359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:10.997554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.997586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.997711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.997743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.998009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.998041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.998243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.998275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.998459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.998491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:10.998776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.998808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.999067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.999340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.999374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.999508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:10.999716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.999749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:10.999936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:10.999967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.000140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.000172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.000396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.000429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.000695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.000726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.000919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.000952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:11.001166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.001198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.001496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.001528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.001705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.001737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.001935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.001966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 00:27:37.558 [2024-11-20 10:06:11.002097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.002129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it. 
00:27:37.558 [2024-11-20 10:06:11.002316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.558 [2024-11-20 10:06:11.002350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.558 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair connection error for tqpair=0xf06ba0 (addr=10.0.0.2, port=4420) repeated continuously from 10:06:11.002 through 10:06:11.028; duplicate log lines omitted]
00:27:37.562 [2024-11-20 10:06:11.028925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.028957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.029232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.029325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.029564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.029600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.029811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.029845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.030107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.030139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 
00:27:37.562 [2024-11-20 10:06:11.030283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.030315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.030520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.030552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.030685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.030716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.030834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.030866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.031126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.031159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 
00:27:37.562 [2024-11-20 10:06:11.031378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.031411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.031537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.031569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.031708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.031740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.031868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.031901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.032091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.032130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 
00:27:37.562 [2024-11-20 10:06:11.032336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.032370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.032618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.032651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.032785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.032817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.033032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.033064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.033330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.033364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 
00:27:37.562 [2024-11-20 10:06:11.033554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.033585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.033831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.033863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.033996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.034028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.034259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.034292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.034471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.034501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 
00:27:37.562 [2024-11-20 10:06:11.034697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.034728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.034912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.034941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.035179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.562 [2024-11-20 10:06:11.035219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.562 qpair failed and we were unable to recover it. 00:27:37.562 [2024-11-20 10:06:11.035491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.035522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.035698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.035728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.035917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.036238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.036268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.036467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.036497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.036722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.036752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.036890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.036918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.037089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.037119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.037293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.037324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.037527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.037556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.037831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.037868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.038086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.038134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.038293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.038339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.038569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.038614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.038797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.038865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.039074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.039109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.039378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.039411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.039608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.039851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.039891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.040080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.040125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.040294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.040343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.040557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.040604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.040843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.041108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.041158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.041366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.041437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.041635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.041671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.041802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.041849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 
00:27:37.563 [2024-11-20 10:06:11.042125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.042160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.042383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.042417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.042598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.042630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.563 qpair failed and we were unable to recover it. 00:27:37.563 [2024-11-20 10:06:11.042888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.563 [2024-11-20 10:06:11.042923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.043112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.043144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 
00:27:37.564 [2024-11-20 10:06:11.043294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.043328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.043517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.043551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.043737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.043770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.043943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.043974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.044226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.044260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 
00:27:37.564 [2024-11-20 10:06:11.044484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.044519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.044706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.044739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.044920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.044951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.045198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.045257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.045503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.045536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 
00:27:37.564 [2024-11-20 10:06:11.045749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.045782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.046037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.046072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.046339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.046374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.046616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.046648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.046842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.046876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 
00:27:37.564 [2024-11-20 10:06:11.047016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.047048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.047230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.047264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.047389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.047421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.047681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.047715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 00:27:37.564 [2024-11-20 10:06:11.047957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.564 [2024-11-20 10:06:11.047989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.564 qpair failed and we were unable to recover it. 
00:27:37.567 [2024-11-20 10:06:11.074025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.567 [2024-11-20 10:06:11.074056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.567 qpair failed and we were unable to recover it. 00:27:37.567 [2024-11-20 10:06:11.074301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.074337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.074457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.074488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.074714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.074746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.074931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.074964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 
00:27:37.568 [2024-11-20 10:06:11.075146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.075180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.075307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.075340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.075539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.075570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.075743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.075776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.075954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.075990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 
00:27:37.568 [2024-11-20 10:06:11.076180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.076238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.076379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.076411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.076652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.076686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.076831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.076865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.076993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.077026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 
00:27:37.568 [2024-11-20 10:06:11.077231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.077285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.077420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.077452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.077627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.077662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.077854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.077886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.078012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.078044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 
00:27:37.568 [2024-11-20 10:06:11.078228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.078262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.078390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.078429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.078566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.078600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.078813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.078845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.079028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.079061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 
00:27:37.568 [2024-11-20 10:06:11.079288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.568 [2024-11-20 10:06:11.079324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.568 qpair failed and we were unable to recover it. 00:27:37.568 [2024-11-20 10:06:11.079521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.079553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.079831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.079876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.080132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.080164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.080416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.080449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.080639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.080683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.080876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.080907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.081151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.081183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.081455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.081489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.081679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.081713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.081938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.081970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.082226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.082263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.082530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.082561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.082745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.082777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.083049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.083084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.083262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.083461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.083500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.083689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.083733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.083926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.083959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.084079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.084111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.084299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.084333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.084531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.084567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.084783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.084818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.084947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.084979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.085269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.085311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.085504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.085537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.085727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.085759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.085872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.085904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.086052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.086089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.086336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.086370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 
00:27:37.569 [2024-11-20 10:06:11.086501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.086533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.086798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.086833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.087027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.569 [2024-11-20 10:06:11.087059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.569 qpair failed and we were unable to recover it. 00:27:37.569 [2024-11-20 10:06:11.087173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.087216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.087420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.087455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 
00:27:37.570 [2024-11-20 10:06:11.087573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.087606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.087719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.087751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.088018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.088050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.088292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.088329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.088535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.088567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 
00:27:37.570 [2024-11-20 10:06:11.088784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.088816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.088990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.089030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.089237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.089272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.089532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.089565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.089684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.089719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 
00:27:37.570 [2024-11-20 10:06:11.089867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.089900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.090107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.090139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.090329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.090362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.090625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.090660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 00:27:37.570 [2024-11-20 10:06:11.090906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.570 [2024-11-20 10:06:11.090938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.570 qpair failed and we were unable to recover it. 
00:27:37.570 [2024-11-20 10:06:11.091143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.570 [2024-11-20 10:06:11.091175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:37.570 qpair failed and we were unable to recover it.
[... last message pair (connect() failed, errno = 111; sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated through 2024-11-20 10:06:11.117134 ...]
00:27:37.842 [2024-11-20 10:06:11.117324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.842 [2024-11-20 10:06:11.117357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.842 qpair failed and we were unable to recover it. 00:27:37.842 [2024-11-20 10:06:11.117571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.842 [2024-11-20 10:06:11.117607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.842 qpair failed and we were unable to recover it. 00:27:37.842 [2024-11-20 10:06:11.117734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.842 [2024-11-20 10:06:11.117766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.842 qpair failed and we were unable to recover it. 00:27:37.842 [2024-11-20 10:06:11.117939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.842 [2024-11-20 10:06:11.117971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.842 qpair failed and we were unable to recover it. 00:27:37.842 [2024-11-20 10:06:11.118172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.118226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.118486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.118519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.118708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.118740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.118854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.118886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.119077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.119111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.119381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.119415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.119539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.119571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.119816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.119858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.120064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.120100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.120259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.120293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.120478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.120510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.120698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.120732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.120905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.120937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.121104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.121136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.121271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.121312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.121454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.121487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.121625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.121655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.121832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.121863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.122060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.122093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.122307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.122340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.122510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.122542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.122648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.122679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.122894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.122930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.123166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.123200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.123353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.123393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.123517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.123550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.123733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.123770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.124030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.124062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.124187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.124250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.124450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.124494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.124630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.124666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.124905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.124939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.125122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.125155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.125354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.125391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.125644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.125678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.125989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.126029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 
00:27:37.843 [2024-11-20 10:06:11.126218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.126252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.126422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.126455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.843 qpair failed and we were unable to recover it. 00:27:37.843 [2024-11-20 10:06:11.126646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.843 [2024-11-20 10:06:11.126680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.126973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.127009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.127258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.127293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.127417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.127449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.127685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.127721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.127971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.128004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.128191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.128250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.128446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.128483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.128665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.128699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.128919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.128952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.129221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.129259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.129366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.129399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.129512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.129545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.129727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.129768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.130048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.130084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.130217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.130251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.130500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.130534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.130785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.130823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.130962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.130998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.131127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.131160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.131416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.131451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.131623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.131655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.131867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.131903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.132088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.132121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.132386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.132421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.132625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.132661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.132936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.132970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.133153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.133188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.133483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.133521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.133709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.133742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.133874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.133908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.134168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.134211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 00:27:37.844 [2024-11-20 10:06:11.134415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.844 [2024-11-20 10:06:11.134451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:37.844 qpair failed and we were unable to recover it. 
00:27:37.844 [2024-11-20 10:06:11.134640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.844 [2024-11-20 10:06:11.134673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:37.844 qpair failed and we were unable to recover it.
00:27:37.844 [... the three-line record above repeats for tqpair=0xf06ba0 through 10:06:11.141, every attempt failing with errno = 111 ...]
00:27:37.846 [2024-11-20 10:06:11.141563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.846 [2024-11-20 10:06:11.141652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:37.846 qpair failed and we were unable to recover it.
00:27:37.848 [... the same record repeats for tqpair=0x7fa2b0000b90 through 10:06:11.163, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:27:37.848 [2024-11-20 10:06:11.163829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.163877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.164105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.164154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.164384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.164423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.164554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.164588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.164834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.164867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 
00:27:37.848 [2024-11-20 10:06:11.165132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.165164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.165365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.165399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.165640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.165673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.165883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.165924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.166067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.166115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 
00:27:37.848 [2024-11-20 10:06:11.166351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.166400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.166629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.166677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.166911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.166958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.167083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.848 [2024-11-20 10:06:11.167138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.848 qpair failed and we were unable to recover it. 00:27:37.848 [2024-11-20 10:06:11.167372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.167422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.167648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.167697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.167943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.167990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.168252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.168305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.168581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.168630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.168928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.168972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.169264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.169309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.169524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.169569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.169771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.169814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.170030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.170074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.170277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.170325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.170544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.170589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.170851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.170894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.171168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.171224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.171515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.171559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.171787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.171831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.172046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.172090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.172397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.172434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.172622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.172653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.172892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.172924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.173130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.173161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.173363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.173395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.173500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.173531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.173767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.173798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.174057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.174103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.174300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.174346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.174496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.174539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.174801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.175044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.175088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.175377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.175422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.175635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.175677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.175960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.175997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.176173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.176212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.176333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.176364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.176593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.176624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.176792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.176823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 
00:27:37.849 [2024-11-20 10:06:11.176942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.176973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.177151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.849 [2024-11-20 10:06:11.177183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.849 qpair failed and we were unable to recover it. 00:27:37.849 [2024-11-20 10:06:11.177320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.177352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.177524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.177571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.177757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.177801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.178019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.178062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.178280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.178324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.178550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.178598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.178731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.178775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.179036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.179077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.179307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.179348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.179560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.179593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.179710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.179738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.179847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.179877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.180063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.180199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.180409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.180817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.180976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.181081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.181110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.181231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.181271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.181474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.181514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.181705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.181745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.181942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.181982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.182165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.182222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.182372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.182411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.182686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.182727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.182864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.182904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.183095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.183129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 
00:27:37.850 [2024-11-20 10:06:11.183321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.183351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.183465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.183493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.183671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.850 [2024-11-20 10:06:11.183701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.850 qpair failed and we were unable to recover it. 00:27:37.850 [2024-11-20 10:06:11.183931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.183959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.184131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.184160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.184270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.184298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.184532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.184561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.184737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.184774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.184964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.185006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.185295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.185542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.185792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.185832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.186042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.186082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.186277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.186327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.186551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.186586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.186721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.186750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.186863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.186892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.187071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.187100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.187236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.187265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.187434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.187462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.187713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.187926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.187954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.188068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.188096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.188327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.188370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.188504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.188543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.188761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.188804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.188999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.189042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.189257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.189305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.189520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.189564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.189834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.189877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 
00:27:37.851 [2024-11-20 10:06:11.190072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.190117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.190425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.190461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.190645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.190677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.190856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.851 [2024-11-20 10:06:11.190888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.851 qpair failed and we were unable to recover it. 00:27:37.851 [2024-11-20 10:06:11.191126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.191158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.191340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.191372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.191600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.191631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.191753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.191784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.192052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.192098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.192303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.192348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.192507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.192549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.192813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.192858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.192988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.193030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.193192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.193271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.193547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.193590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.193808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.193855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.194111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.194143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.194330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.194362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.194599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.194630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.194890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.194921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.195113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.195143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.195324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.195355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.195560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.195604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.195873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.195926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.196162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.196218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.196428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.196473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.196618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.196661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.196862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.196906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.197172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.197227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.197513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.197548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.197740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.197772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.198014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.198136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.198167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.198445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.198477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.198730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.198763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 
00:27:37.852 [2024-11-20 10:06:11.199020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.199069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.852 qpair failed and we were unable to recover it. 00:27:37.852 [2024-11-20 10:06:11.199353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.852 [2024-11-20 10:06:11.199403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.199563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.199611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.199888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.199935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.200217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.200268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.853 [2024-11-20 10:06:11.200489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.200536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.200830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.200878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.201174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.201262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.201490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.201540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.201785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.201834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.853 [2024-11-20 10:06:11.202071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.202120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.202341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.202392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.202599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.202648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.202798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.202844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.203000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.203048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.853 [2024-11-20 10:06:11.203353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.203401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.203652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.203686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.203880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.203912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.204090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.204123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.204312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.204346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.853 [2024-11-20 10:06:11.204538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.204572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.204689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.204722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.204834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.204866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.205049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.205097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.205308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.205357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.853 [2024-11-20 10:06:11.205595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.205643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.205800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.205847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.206059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.206101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.206369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.206428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 00:27:37.853 [2024-11-20 10:06:11.206584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.853 [2024-11-20 10:06:11.206631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.853 qpair failed and we were unable to recover it. 
00:27:37.856 [2024-11-20 10:06:11.231060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.231085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.231186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.231224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.231481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.231505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.231662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.231687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.231804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.231830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 
00:27:37.856 [2024-11-20 10:06:11.232007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.232030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.232156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.232347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.232373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.232615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.232651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.232786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.232821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 
00:27:37.856 [2024-11-20 10:06:11.233006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.233041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.856 qpair failed and we were unable to recover it. 00:27:37.856 [2024-11-20 10:06:11.233157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.856 [2024-11-20 10:06:11.233190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.233415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.233452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.233592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.233634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.233834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.233868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.233989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.234023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.234232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.234269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.234459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.234488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.234720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.234745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.234904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.234928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.235087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.235111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.235358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.235384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.235484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.235509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.235785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.235809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.236065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.236090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.236289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.236326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.236465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.236500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.236657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.236694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.236816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.236852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.237138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.237174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.237396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.237433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.237708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.237746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.237969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.238007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.238190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.238238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.238427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.238462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.238597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.238630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.238775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.238811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.239009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.239046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.239183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.239228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.239349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.239386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.239523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.239559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.239752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.239791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.239993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.240019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.240252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.240278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 00:27:37.857 [2024-11-20 10:06:11.240457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.240481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.857 qpair failed and we were unable to recover it. 
00:27:37.857 [2024-11-20 10:06:11.240749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.857 [2024-11-20 10:06:11.240771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.240877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.240901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.241099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.241230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.241364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.241490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.241716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.241882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.241915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.242060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.242099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.242319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.242353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.242534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.242569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.242747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.242778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.242961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.242992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.243174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.243215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.243391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.243415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.243585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.243609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.243745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.243908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.243931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.244106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.244243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.244374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.244620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.244751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.244937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.244965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.245179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.245238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.245442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.245476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.245772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.245808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.245948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.245979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.246113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.246145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.246289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.246322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.246516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.246552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.246682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.246707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.246883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.246906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.247012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.247035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.247258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.247282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 
00:27:37.858 [2024-11-20 10:06:11.247484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.858 [2024-11-20 10:06:11.247507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:37.858 qpair failed and we were unable to recover it. 00:27:37.858 [2024-11-20 10:06:11.247688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.859 [2024-11-20 10:06:11.247711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.561710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.561780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.562079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.562114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.562366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.562401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 
00:27:38.127 [2024-11-20 10:06:11.562590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.562626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.562763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.562796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.563068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.563102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.563231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.563266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.563509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.563545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 
00:27:38.127 [2024-11-20 10:06:11.563670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.563705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.563880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.563912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.564156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.564189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.564428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.564464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.564679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.564714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 
00:27:38.127 [2024-11-20 10:06:11.564858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.564892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.565080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.565114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.565362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.127 [2024-11-20 10:06:11.565398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.127 qpair failed and we were unable to recover it. 00:27:38.127 [2024-11-20 10:06:11.565528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.565561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.565751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.565785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.565928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.565962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.566117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.566155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.566301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.566337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.566462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.566495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.566664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.566697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.566886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.566919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.567105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.567142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.567269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.567311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.567443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.567476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.567695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.567729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.567922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.567958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.568135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.568169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.568324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.568360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.568599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.568632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.568832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.568868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.568997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.569030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.569220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.569261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.569455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.569492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.569681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.569716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.569913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.569946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.570129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.570162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.570395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.570440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.570582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.570616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.570810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.570842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.570987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.571019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.571143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.571176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.571342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.571380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.571568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.571602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.571847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.571880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.572133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.572177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.572460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.572494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.572713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.572746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.572867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.572900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.573103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.573138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.128 [2024-11-20 10:06:11.573343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.573385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 
00:27:38.128 [2024-11-20 10:06:11.573510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.128 [2024-11-20 10:06:11.573544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.128 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.573738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.573771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.573969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.574006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.574223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.574258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.574395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.574429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.574716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.574975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.575010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.575260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.575296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.575495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.575528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.575702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.575737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.575859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.575893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.576091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.576125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.576256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.576291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.576474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.576510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.576730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.576766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.577015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.577048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.577243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.577279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.577488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.577524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.577723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.577755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.577950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.577983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.578286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.578325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.578450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.578484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.578738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.578772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.578950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.578983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.579099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.579140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.579253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.579288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.579548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.579581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.579772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.579806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.579952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.579987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.580261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.580295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.580412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.580445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.580687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.580724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.580882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.580916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.581103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.581136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.581325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.581360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 
00:27:38.129 [2024-11-20 10:06:11.581530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.581573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.581772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.581808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.581938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.581972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.582179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.129 [2024-11-20 10:06:11.582226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.129 qpair failed and we were unable to recover it. 00:27:38.129 [2024-11-20 10:06:11.582432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.130 [2024-11-20 10:06:11.582477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.130 qpair failed and we were unable to recover it. 
00:27:38.130 [2024-11-20 10:06:11.582687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.582727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.582903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.582938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.583076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.583110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.583294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.583330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.583517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.583551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.583666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.583699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.583888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.583923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.584053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.584086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.584199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.584252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.584442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.584477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.584687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.584719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.584838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.584872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.585153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.585188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.585433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.585467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.585662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.585696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.585876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.585909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.586172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.586245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.586440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.586473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.586712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.586756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.586930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.586965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.587223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.587258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.587443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.587476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.587680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.587716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.587913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.587946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.588137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.588171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.588394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.588438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.588617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.588652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.588919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.588959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.589144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.589178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.589399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.589436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.589682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.589902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.589936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.590109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.590144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.590319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.590356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.590621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.590655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.590944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.590977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.591189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.591240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.130 qpair failed and we were unable to recover it.
00:27:38.130 [2024-11-20 10:06:11.591425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.130 [2024-11-20 10:06:11.591460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.591599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.591633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.591743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.591776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.591952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.591989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.592267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.592304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.592547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.592580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.592682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.592717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.592918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.592953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.593226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.593260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.593405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.593438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.593611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.593654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.593920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.593953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.594071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.594103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.594276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.594311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.594552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.594588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.594761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.594795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.594984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.595017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.595146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.595179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.595394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.595431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.595610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.595644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.595851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.595885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.596075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.596109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.596294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.596331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.596521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.596554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.596817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.596851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.597047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.597087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.597231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.597266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.597479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.597516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.597688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.597722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.597853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.597896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.598025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.598060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.598299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.598341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.598450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.598483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.598660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.598701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.598909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.598945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.599224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.599259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.599457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.599491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.599630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.131 [2024-11-20 10:06:11.599665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.131 qpair failed and we were unable to recover it.
00:27:38.131 [2024-11-20 10:06:11.599842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.599877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.600116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.600151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.600428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.600469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.600656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.600690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.600980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.601013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.601230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.601266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.601446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.601482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.601626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.601659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.601901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.601936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.602144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.602180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.602393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.602429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.602622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.602655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.602938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.602971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.603225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.603262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.603480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.603514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.603777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.603811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.604009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.604045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.604254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.604289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.604466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.604500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.604691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.604725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.604878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.604912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.605121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.605154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.605340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.605375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.605615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.605655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.605789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.605824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.605947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.605979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.606169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.606217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.606476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.606518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.606650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.606684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.606876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.606908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.607085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.607119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.607240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.607277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.132 qpair failed and we were unable to recover it.
00:27:38.132 [2024-11-20 10:06:11.607521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.132 [2024-11-20 10:06:11.607557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.607733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.607767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.607909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.607943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.608058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.608092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.608288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.608325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.608515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.608549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.608721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.608754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.608972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.133 [2024-11-20 10:06:11.609007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.133 qpair failed and we were unable to recover it.
00:27:38.133 [2024-11-20 10:06:11.609135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.609172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.609315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.609350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.609611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.609643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.609841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.609874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.610060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.610096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.610227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.610262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.610392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.610426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.610616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.610649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.610849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.610894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.611096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.611130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.611316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.611352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.611596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.611630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.611830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.611866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.611983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.612016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.612154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.612188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.612397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.612430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.612654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.612691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.612819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.612853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.613044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.613078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.613260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.613296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.613432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.613467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.613647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.613686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.613962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.613995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.614131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.614166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.614386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.614423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.614560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.614594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.614773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.614807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.614916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.614949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.615063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.615097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.133 [2024-11-20 10:06:11.615374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.615411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 
00:27:38.133 [2024-11-20 10:06:11.615592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.133 [2024-11-20 10:06:11.615625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.133 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.615860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.615893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.616019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.616065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.616244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.616279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.616464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.616497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.616644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.616678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.616857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.616901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.617104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.617138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.617261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.617296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.617538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.617571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.617680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.617723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.617991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.618027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.618146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.618179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.618421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.618456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.618642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.618678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.618922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.618955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.619085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.619122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.619251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.619288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.619477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.619520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.619663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.619697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.619945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.619978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.620081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.620115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.620251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.620287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.620491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.620527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.620645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.620678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.620868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.620902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.621078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.621116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.621257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.621294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.621407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.621440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.621724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.621758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.622029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.622261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.622420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.622567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.622736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 
00:27:38.134 [2024-11-20 10:06:11.622956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.622992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.623171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.623219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.623415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.623448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.623618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.134 [2024-11-20 10:06:11.623650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.134 qpair failed and we were unable to recover it. 00:27:38.134 [2024-11-20 10:06:11.623763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.623803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 
00:27:38.135 [2024-11-20 10:06:11.623992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.624027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.624312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.624348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.624474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.624507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.624644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.624683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.624801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.624835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 
00:27:38.135 [2024-11-20 10:06:11.625043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.625076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.625260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.625297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.625561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.625596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.625726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.625760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 00:27:38.135 [2024-11-20 10:06:11.625883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.135 [2024-11-20 10:06:11.625917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.135 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.650515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.650549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.650736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.650769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.650873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.650907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.651032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.651065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.651309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.651344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.651470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.651504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.651628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.651662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.651840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.651873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.652365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.652412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.652689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.652725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.652924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.652958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.653135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.653169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.653473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.653511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.653695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.653728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.653969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.654003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.654266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.654301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.654443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.654477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.654673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.654706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.654881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.654915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.655049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.655084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.655354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.655388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.655520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.655554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.655786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.655904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.655938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.138 [2024-11-20 10:06:11.656078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.656112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 
00:27:38.138 [2024-11-20 10:06:11.656320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.138 [2024-11-20 10:06:11.656355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.138 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.656470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.656503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.656695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.656728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.656839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.656873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.657058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.657092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.657213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.657248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.657379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.657412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.657594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.657628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.657825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.657859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.657984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.658016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.658221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.658263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.658466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.658500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.658672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.658705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.658887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.658921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.659127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.659161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.659350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.659385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.659508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.659542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.659712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.659747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.659957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.660077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.660111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.660224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.660260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.660394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.660427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.660612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.660646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.660831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.660866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.661005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.661039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.661142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.661176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.661415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.661450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.661713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.661747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.661878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.661911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.662094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.662128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.662379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.662415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.662597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.662630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.662756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.662790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.662961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.662995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.663171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.663215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 
00:27:38.139 [2024-11-20 10:06:11.663400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.663434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.663614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.663647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.663857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.663891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.664094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.139 [2024-11-20 10:06:11.664128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.139 qpair failed and we were unable to recover it. 00:27:38.139 [2024-11-20 10:06:11.664314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.664350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 
00:27:38.140 [2024-11-20 10:06:11.664618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.664651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.664892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.664925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.665104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.665138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.665330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.665365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.665553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.665586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 
00:27:38.140 [2024-11-20 10:06:11.665779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.665813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.665990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.666023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.666236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.666271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.666443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.666477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.666681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.666715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 
00:27:38.140 [2024-11-20 10:06:11.666954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.666987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.667238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.667278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.667398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.667433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.667682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.667715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 00:27:38.140 [2024-11-20 10:06:11.667888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.140 [2024-11-20 10:06:11.667922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.140 qpair failed and we were unable to recover it. 
[... the same posix.c:1054 connect() failed (errno = 111) / nvme_tcp.c:2288 sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0xf06ba0 (addr=10.0.0.2, port=4420) repeats ~110 more times between 10:06:11.668 and 10:06:11.692; identical repeats omitted ...]
00:27:38.143 [2024-11-20 10:06:11.691878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.691912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.692037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.692071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.692259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.692294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.692469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.692503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.692678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.692712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 
00:27:38.143 [2024-11-20 10:06:11.692816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.692849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.693022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.693056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.693248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.693284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.693521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.693554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.693738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.693771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 
00:27:38.143 [2024-11-20 10:06:11.693966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.694130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.694368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.694578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.694719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 
00:27:38.143 [2024-11-20 10:06:11.694859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.694892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.695012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.695046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.695340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.695375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.695492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.695525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.695696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.695730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 
00:27:38.143 [2024-11-20 10:06:11.695913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.695948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.696125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.143 [2024-11-20 10:06:11.696159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.143 qpair failed and we were unable to recover it. 00:27:38.143 [2024-11-20 10:06:11.696362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.144 [2024-11-20 10:06:11.696403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.144 qpair failed and we were unable to recover it. 00:27:38.144 [2024-11-20 10:06:11.696609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.144 [2024-11-20 10:06:11.696643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.144 qpair failed and we were unable to recover it. 00:27:38.144 [2024-11-20 10:06:11.696885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.696918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 
00:27:38.423 [2024-11-20 10:06:11.697039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.697073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.697188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.697233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.697411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.697446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.697712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.697745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.697985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.698018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 
00:27:38.423 [2024-11-20 10:06:11.698280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.698316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.698491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.698526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.698665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.698698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.698955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.698990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.699115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.699150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 
00:27:38.423 [2024-11-20 10:06:11.699269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.699304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.699429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.699463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.699605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.699639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.699924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.699957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.700153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.700187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 
00:27:38.423 [2024-11-20 10:06:11.700409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.700443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.700550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.700849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.700882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.701162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.701196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.701431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.701466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 
00:27:38.423 [2024-11-20 10:06:11.701767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.701801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.701992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.423 [2024-11-20 10:06:11.702027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.423 qpair failed and we were unable to recover it. 00:27:38.423 [2024-11-20 10:06:11.702278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.702313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.702487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.702521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.702733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.702767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.703014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.703047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.703167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.703209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.703401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.703434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.703702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.703736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.703923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.703956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.704140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.704175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.704314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.704348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.704606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.704639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.704758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.704792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.704964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.704997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.705110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.705144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.705259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.705291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.705503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.705535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.705709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.705748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.705864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.705898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.706093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.706127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.706339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.706374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.706498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.706531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.706781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.706815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.706914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.706948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.707155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.707190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.707438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.707472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.707687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.707721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.707849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.707881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.708073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.708107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.424 [2024-11-20 10:06:11.708392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.708427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.708620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.708653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.708865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.708900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.709118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.709152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 00:27:38.424 [2024-11-20 10:06:11.709437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.424 [2024-11-20 10:06:11.709472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.424 qpair failed and we were unable to recover it. 
00:27:38.427 [2024-11-20 10:06:11.734742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.734776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 00:27:38.427 [2024-11-20 10:06:11.734967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.735002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 00:27:38.427 [2024-11-20 10:06:11.735273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.735309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 00:27:38.427 [2024-11-20 10:06:11.735507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.735540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 00:27:38.427 [2024-11-20 10:06:11.735764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.735799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 
00:27:38.427 [2024-11-20 10:06:11.736062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.427 [2024-11-20 10:06:11.736096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.427 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.736383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.736418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.736605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.736639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.736765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.736799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.736976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.737014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.737124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.737158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.737353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.737387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.737556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.737589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.737772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.737805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.738064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.738098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.738287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.738322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.738426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.738458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.738639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.738673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.738845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.738878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.739139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.739172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.739371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.739406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.739574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.739608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.739717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.739750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.739945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.739980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.740086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.740120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.740380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.740415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.740625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.740658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.740850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.740884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.741097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.741131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.741261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.741296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.741497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.741530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.741736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.741771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.742010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.742044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.742180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.742231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.742484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.742518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.742756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.742790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.742988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.743023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.743297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.743332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.743456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.743490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.743678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.743712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 
00:27:38.428 [2024-11-20 10:06:11.743840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.428 [2024-11-20 10:06:11.743873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.428 qpair failed and we were unable to recover it. 00:27:38.428 [2024-11-20 10:06:11.743996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.744030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.744244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.744279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.744463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.744496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.744598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.744632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.744815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.744849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.745067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.745101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.745278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.745312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.745594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.745628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.745741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.745774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.746022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.746057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.746173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.746234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.746498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.746531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.746712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.746747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.746954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.746988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.747184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.747229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.747480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.747514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.747703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.747738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.747866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.747900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.748156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.748190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.748381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.748414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.748582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.748616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.748789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.748822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.748997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.749032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.749226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.749261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.749433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.749467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.749662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.749696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.749823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.749858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.749973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.750142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.750176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.750369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.750404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.750669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.750703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.750888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.750922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.751049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.751082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 00:27:38.429 [2024-11-20 10:06:11.751267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.751302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.429 [2024-11-20 10:06:11.751428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.429 [2024-11-20 10:06:11.751462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.429 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.775177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.775220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.775342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.775375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.775620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.775654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.775783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.775816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.775929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.775963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.776144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.776177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.776455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.776490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.776607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.776640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.776814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.776848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.777020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.777213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.777249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.777359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.777393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.777520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.777554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.777727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.777760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.778000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.778035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.778155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.778188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.778403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.778437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.778561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.778593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.778776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.778808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.778991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.779025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.779222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.779257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.779442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.779476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.779749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.779783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.779907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.779947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.780143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.780177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.780477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.780509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.780712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.780746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.780873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.780907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.781014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.781048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.781175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.781218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 
00:27:38.433 [2024-11-20 10:06:11.781391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.781424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.781633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.781667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.781859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.781893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.433 qpair failed and we were unable to recover it. 00:27:38.433 [2024-11-20 10:06:11.782066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.433 [2024-11-20 10:06:11.782099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.782369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.782404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.782514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.782544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.782783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.782817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.783067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.783102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.783297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.783332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.783536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.783570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.783686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.783720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.783834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.783867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.784065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.784099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.784289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.784323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.784449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.784483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.784591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.784624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.784759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.784792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.785006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.785041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.785220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.785254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.785426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.785460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.785699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.785732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.785848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.785882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.786116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.786150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.786350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.786384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.786629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.786662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.786907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.786941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.787114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.787147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.787358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.787393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.787571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.787604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.787795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.787829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.788015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.788048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.788235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.788271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.788461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.788494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.788612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.788646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.788885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.788924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.789114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.789147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.789275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.789309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.789483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.789517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.789765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.789798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.789923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.789958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 
00:27:38.434 [2024-11-20 10:06:11.790143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.790176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.434 [2024-11-20 10:06:11.790366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.434 [2024-11-20 10:06:11.790400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.434 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.790529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.790562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.790754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.790788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.791025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.791059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.435 [2024-11-20 10:06:11.791231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.791266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.791460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.791494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.791705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.791738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.791946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.791980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 00:27:38.435 [2024-11-20 10:06:11.792166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.435 [2024-11-20 10:06:11.792200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.435 qpair failed and we were unable to recover it. 
00:27:38.437 [2024-11-20 10:06:11.811155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.437 [2024-11-20 10:06:11.811195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.437 qpair failed and we were unable to recover it.
00:27:38.437 [2024-11-20 10:06:11.811409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.437 [2024-11-20 10:06:11.811444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.437 qpair failed and we were unable to recover it.
00:27:38.437 [2024-11-20 10:06:11.811620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.437 [2024-11-20 10:06:11.811653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.437 qpair failed and we were unable to recover it.
00:27:38.437 [2024-11-20 10:06:11.811894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.437 [2024-11-20 10:06:11.811981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:38.437 qpair failed and we were unable to recover it.
00:27:38.437 [2024-11-20 10:06:11.812130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.437 [2024-11-20 10:06:11.812167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:38.437 qpair failed and we were unable to recover it.
00:27:38.438 [2024-11-20 10:06:11.815372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.815405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.815608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.815643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.815820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.815853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.816039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.816073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.816283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.816318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 
00:27:38.438 [2024-11-20 10:06:11.816432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.816466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.816649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.816683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.816824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.816858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.817072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.817106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.817228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.817263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 
00:27:38.438 [2024-11-20 10:06:11.817474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.817509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.817683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.817717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.817851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.817886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.818088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.818123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.818334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 
00:27:38.438 [2024-11-20 10:06:11.818513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.818547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.818792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.818826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.819003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.819037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.819228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.819264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.819451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.819485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 
00:27:38.438 [2024-11-20 10:06:11.819756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.819790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.819933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.819966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.820140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.820175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.820402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.820444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.820622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.820665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 
00:27:38.438 [2024-11-20 10:06:11.820794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.820828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.820948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.438 [2024-11-20 10:06:11.820980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.438 qpair failed and we were unable to recover it. 00:27:38.438 [2024-11-20 10:06:11.821161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.821196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.821415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.821449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.821571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.821607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.821780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.821813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.822058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.822092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.822362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.822406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.822600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.822634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.822813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.822846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.823045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.823079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.823212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.823250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.823392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.823427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.823536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.823568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.823758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.823792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.823989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.824027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.824227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.824266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.824469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.824504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.824628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.824662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.824864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.824899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.825092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.825127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.825314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.825350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.825556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.825589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.825770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.825809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.826007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.826042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.826243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.826284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.826579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.826620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.826736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.826771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.826962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.826995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.827119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.827153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.827366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.827406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.827607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.827642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.827759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.827792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.827988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.828020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.828216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.828252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.828479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.828514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.828709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.828742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.828872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.828906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.829079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.829115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 00:27:38.439 [2024-11-20 10:06:11.829247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.439 [2024-11-20 10:06:11.829284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.439 qpair failed and we were unable to recover it. 
00:27:38.439 [2024-11-20 10:06:11.829490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.829522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.829705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.829739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.829866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.829900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.830019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.830055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.830314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.830351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 
00:27:38.440 [2024-11-20 10:06:11.830463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.830499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.830681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.830715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.830896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.830940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.831184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.831235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.831433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.831466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 
00:27:38.440 [2024-11-20 10:06:11.831576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.831609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.831823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.832036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.832069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.832261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.832297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 00:27:38.440 [2024-11-20 10:06:11.832422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.440 [2024-11-20 10:06:11.832459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.440 qpair failed and we were unable to recover it. 
00:27:38.443 [2024-11-20 10:06:11.854409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.854441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.854549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.854583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.854696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.854729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.854927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.854963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.855153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.855188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 
00:27:38.443 [2024-11-20 10:06:11.855345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.855384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.855502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.855537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.855659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.855694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.855866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.855899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.856087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.856120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 
00:27:38.443 [2024-11-20 10:06:11.856264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.856299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.856416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.856458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.856576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.856607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.856724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.856756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.857015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 
00:27:38.443 [2024-11-20 10:06:11.857163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.857502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.857705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 00:27:38.443 [2024-11-20 10:06:11.857915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.443 [2024-11-20 10:06:11.857950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.443 qpair failed and we were unable to recover it. 
00:27:38.443 [2024-11-20 10:06:11.858066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.858216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.858364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.858574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.858788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.858953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.858984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.859085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.859118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.859300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.859333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.859461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.859492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.859615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.859649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.859829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.859861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.859981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.860124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.860361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.860496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.860705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.860849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.860879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.861065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.861268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.861480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.861621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.861759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.861959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.861989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.862158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.862189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.862329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.862362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.862542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.862573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.862688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.862719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.862830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.862861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.862974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.863195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.863353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.863488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.863642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.863780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.863811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.863985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.864018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 
00:27:38.444 [2024-11-20 10:06:11.864147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.864177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.864297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.864330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.864586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.444 [2024-11-20 10:06:11.864617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.444 qpair failed and we were unable to recover it. 00:27:38.444 [2024-11-20 10:06:11.864741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.864779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.864886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.864918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 
00:27:38.445 [2024-11-20 10:06:11.865091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.865226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.865444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.865578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.865807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 
00:27:38.445 [2024-11-20 10:06:11.865965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.865997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.866093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.866124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.866247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.866281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.866406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.866437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.866606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.866642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 
00:27:38.445 [2024-11-20 10:06:11.866810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.866840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.867014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.867045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.867284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.867321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.867571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.867602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.867725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.867755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 
00:27:38.445 [2024-11-20 10:06:11.867866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.867897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.868100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.868136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.868272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.868306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.868440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.868472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 00:27:38.445 [2024-11-20 10:06:11.868666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.445 [2024-11-20 10:06:11.868696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.445 qpair failed and we were unable to recover it. 
00:27:38.448 [2024-11-20 10:06:11.889953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.889986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.890159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.890197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.890356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.890391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.890521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.890554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.890732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.890765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 
00:27:38.448 [2024-11-20 10:06:11.890872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.890906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.891105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.891141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.891279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.891315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.891502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.891536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.891651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.891684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 
00:27:38.448 [2024-11-20 10:06:11.891868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.891904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.892105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.892141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.892352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.892388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.892514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.892548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.448 [2024-11-20 10:06:11.892683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.892723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 
00:27:38.448 [2024-11-20 10:06:11.892833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.448 [2024-11-20 10:06:11.892867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.448 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.892975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.893008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.893183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.893233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.893412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.893447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.893590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.893629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.893905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.893937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.894054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.894088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.894269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.894305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.894483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.894518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.894712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf14af0 is same with the state(6) to be set 00:27:38.449 [2024-11-20 10:06:11.895120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.895221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.895473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.895516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.895726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.895762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.895875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.895909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.896083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.896246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.896490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.896645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.896799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.896964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.896999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.897132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.897180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.897405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.897455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.897609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.897655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.897941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.897988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.898255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.898308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.898627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.898675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.898898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.898946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.899094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.899144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.899407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.899449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.899570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.899604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.899736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.899769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.899953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.899992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.900175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.900222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.900336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.900369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.900537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.900571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.900687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.900721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 
00:27:38.449 [2024-11-20 10:06:11.900926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.900969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.901112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.901145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.902666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.449 [2024-11-20 10:06:11.902724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.449 qpair failed and we were unable to recover it. 00:27:38.449 [2024-11-20 10:06:11.902876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.902912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.903103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.903137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.903323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.903361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.903632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.903666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.903810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.903843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.904061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.904104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.904238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.904274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.904391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.904424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.904538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.904573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.904832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.904865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.904980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.905174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.905426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.905583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.905724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.905912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.905958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.906143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.906178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.906385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.906420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.906559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.906592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.906729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.906764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.906958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.906991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.907164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.907325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.907470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.907812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.907960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.907994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.908170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.908217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.908483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.908519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.908646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.908680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.908929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.908963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 00:27:38.450 [2024-11-20 10:06:11.909139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.450 [2024-11-20 10:06:11.909183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.450 qpair failed and we were unable to recover it. 
00:27:38.450 [2024-11-20 10:06:11.909475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.450 [2024-11-20 10:06:11.909543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:38.450 qpair failed and we were unable to recover it.
00:27:38.450 [2024-11-20 10:06:11.909692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.450 [2024-11-20 10:06:11.909729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:38.450 qpair failed and we were unable to recover it.
00:27:38.450 [2024-11-20 10:06:11.909859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.450 [2024-11-20 10:06:11.909893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:38.450 qpair failed and we were unable to recover it.
00:27:38.450 [2024-11-20 10:06:11.910088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.450 [2024-11-20 10:06:11.910122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:38.450 qpair failed and we were unable to recover it.
00:27:38.450 [2024-11-20 10:06:11.910325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.450 [2024-11-20 10:06:11.910360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:38.450 qpair failed and we were unable to recover it.
00:27:38.453 [2024-11-20 10:06:11.931336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.931370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.931619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.931656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.931914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.931950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.932139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.932173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.932377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.932462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 
00:27:38.453 [2024-11-20 10:06:11.932695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.932748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.932998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.933032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.933272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.933306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.933440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.933474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.933688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.933736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 
00:27:38.453 [2024-11-20 10:06:11.934049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.934095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.934245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.453 [2024-11-20 10:06:11.934295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.453 qpair failed and we were unable to recover it. 00:27:38.453 [2024-11-20 10:06:11.934595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.934644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.934885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.934933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.935232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.935519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.935570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.935718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.935753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.936011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.936045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.936170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.936213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.936340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.936373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.936559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.936593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.936785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.936818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.937083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.937117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.937312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.937363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.937667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.937713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.938020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.938069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.938315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.938368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.938648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.938696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.939012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.939059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.939346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.939388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.939514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.939548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.939804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.939837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.940100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.940134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.940397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.940431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.940610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.940645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.940753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.940788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.940925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.940959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.941244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.941294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.941506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.941555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.941784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.941831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.942041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.942090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.942334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.942404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.942717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.942755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.942865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.942902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.943051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.943098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 
00:27:38.454 [2024-11-20 10:06:11.943232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.943282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.943581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.943628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.943846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.943892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.944169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.944230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.454 qpair failed and we were unable to recover it. 00:27:38.454 [2024-11-20 10:06:11.944398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.454 [2024-11-20 10:06:11.944444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.944772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.944821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.945065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.945112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.945293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.945330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.945460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.945493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.945701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.945744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.945856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.945889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.946164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.946213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.946346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.946380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.946563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.946597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.946817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.946851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.947030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.947064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.947187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.947228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.947425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.947461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.947675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.947726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.947947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.947995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.948272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.948321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.948471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.948519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.948713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.948763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.949048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.949094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.949410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.949460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.949690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.949731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.949924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.949958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.950152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.950186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.950396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.950430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.950615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.950648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.950771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.950805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.950992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.951026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.951148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.951182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.951418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.951466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.951769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.951817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.951980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.952026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.952257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.952308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.952531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.952578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.952880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.952926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.953144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.953194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 
00:27:38.455 [2024-11-20 10:06:11.953417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.455 [2024-11-20 10:06:11.953457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.455 qpair failed and we were unable to recover it. 00:27:38.455 [2024-11-20 10:06:11.953704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.953738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.953863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.953895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.954084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.954119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.954247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.954283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.954493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.954526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.954642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.954675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.954857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.954890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.955024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.955072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.955345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.955401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.955633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.955680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.955988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.956035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.956179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.956239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.956448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.956495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.956660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.956706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.956920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.956968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.957239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.957281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.957494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.957528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.957724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.957758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.957872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.957906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.958178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.958224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.958504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.958538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.958752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.958786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.959086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.959134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.959298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.959348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.959513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.959561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.959836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.959881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.960159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.960223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.960378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.960427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.960654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.960700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 
00:27:38.456 [2024-11-20 10:06:11.960990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.961042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.961215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.961266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.961491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.961537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.456 [2024-11-20 10:06:11.961839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.456 [2024-11-20 10:06:11.961887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.456 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.962091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.962141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.962385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.962436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.962683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.962753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.962908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.962945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.963147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.963184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.963451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.963485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.963754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.963788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.963961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.963997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.964192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.964242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.964366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.964400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.964692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.964729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.964947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.964983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.965161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.965194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.965461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.965506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.965721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.965755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.965928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.965962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.966150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.966184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.966402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.966438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.966722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.966757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.966964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.966998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.967189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.967245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.967446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.967480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.967624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.967657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.967841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.967875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.968102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.968138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.968419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.968455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.968697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.968730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.968989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.969026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.969225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.969260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.969391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.969433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.969646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.969682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.969862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.969896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.970010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.970042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.970226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.970262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.970387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.970423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.970732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.970784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 
00:27:38.457 [2024-11-20 10:06:11.971036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.457 [2024-11-20 10:06:11.971072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.457 qpair failed and we were unable to recover it. 00:27:38.457 [2024-11-20 10:06:11.971253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.971299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.971459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.971506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.971659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.971707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.971860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.971906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 
00:27:38.458 [2024-11-20 10:06:11.972134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.972181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.972529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.972583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.972868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.972916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.973130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.973180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.973414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.973465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 
00:27:38.458 [2024-11-20 10:06:11.973707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.973744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.974013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.974046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.974293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.974328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.974452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.974486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.974727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.974760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 
00:27:38.458 [2024-11-20 10:06:11.974952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.974986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.975257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.975308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.975460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.975507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.975737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.975786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.975988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.976035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 
00:27:38.458 [2024-11-20 10:06:11.976256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.976298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.976427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.976462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.976707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.976739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.976876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.976911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 00:27:38.458 [2024-11-20 10:06:11.977178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.458 [2024-11-20 10:06:11.977241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.458 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:11.999683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:11.999715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:11.999851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:11.999883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.000053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.000085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.000215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.000250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.000448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.000483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.000661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.000692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.000931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.000963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.001079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.001114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.001334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.001369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.001544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.001577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.001694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.001726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.001856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.001891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.002024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.002058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.002246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.002280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.002461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.002493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.002669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.002701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.002839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.002873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.002997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.003223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.003362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.003568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.003791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.003938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.003970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.004157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.004188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.004377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.004416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.004646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.004678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.004863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.004896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.005080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.005111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.005287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.005330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.005466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.005498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.005679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.005712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.005847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.005878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.006050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.006094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.006243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.006278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.006454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.006487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 
00:27:38.742 [2024-11-20 10:06:12.006601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.006633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.006759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.742 [2024-11-20 10:06:12.006790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.742 qpair failed and we were unable to recover it. 00:27:38.742 [2024-11-20 10:06:12.006908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.006943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.007076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.007111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.007250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.007285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.007529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.007562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.007754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.007787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.007931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.007964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.008160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.008192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.008405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.008437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.008625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.008659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.008859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.008893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.009024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.009056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.009241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.009275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.009502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.009677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.009710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.009951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.009982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.010156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.010191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.010406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.010447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.010630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.010662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.010853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.010884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.011052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.011093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.011288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.011322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.011507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.011540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.011665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.011696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.011817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.011849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.012043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.012078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.012259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.012293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.012412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.012443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.012618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.012651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.012829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.012863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.013053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.013084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.013215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.013252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.013524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.013558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.013742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.013776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 
00:27:38.743 [2024-11-20 10:06:12.013901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.013934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.014051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.014083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.014298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.014333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.743 [2024-11-20 10:06:12.014554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.743 [2024-11-20 10:06:12.014588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.743 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.014718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.014749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 
00:27:38.744 [2024-11-20 10:06:12.014934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.014966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.015096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.015129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.015353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.015390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.015576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.015609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.015736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.015768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 
00:27:38.744 [2024-11-20 10:06:12.016049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.016088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.016229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.016264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.016461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.016493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.016628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.016660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 00:27:38.744 [2024-11-20 10:06:12.016765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.744 [2024-11-20 10:06:12.016798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.744 qpair failed and we were unable to recover it. 
00:27:38.747 [2024-11-20 10:06:12.040365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.040399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.040588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.040621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.040893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.040925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.041055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.041087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.041299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.041336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 
00:27:38.747 [2024-11-20 10:06:12.041529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.041562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.041752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.041784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.041976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.042009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.042224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.042258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.042453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.042487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 
00:27:38.747 [2024-11-20 10:06:12.042749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.042781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.043000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.043034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.043278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.043311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.043502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.043532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.043752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.043790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 
00:27:38.747 [2024-11-20 10:06:12.043993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.044026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.044230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.044266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.044485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.044518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.044765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.044800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.044972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.045004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 
00:27:38.747 [2024-11-20 10:06:12.045280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.747 [2024-11-20 10:06:12.045313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.747 qpair failed and we were unable to recover it. 00:27:38.747 [2024-11-20 10:06:12.045536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.045571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.045787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.045820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.046043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.046335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.046370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.046477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.046510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.046633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.046663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.046785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.046816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.047081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.047125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.047322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.047357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.047560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.047593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.047790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.047822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.048004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.048039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.048302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.048337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.048519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.048551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.048755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.048803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.049013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.049046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.049282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.049316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.049564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.049607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.049803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.049835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.050032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.050063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.050250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.050282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.050488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.050522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.050785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.050818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.051020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.051051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.051246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.051286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.051529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.051562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.051707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.051739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.051937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.051969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.052235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.052272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.052480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.052512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.052628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.052658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.052944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.052979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.053213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.053247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.053561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.053594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 
00:27:38.748 [2024-11-20 10:06:12.053791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.053831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.054124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.054157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.054374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.054408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.748 [2024-11-20 10:06:12.054650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.748 [2024-11-20 10:06:12.054685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.748 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.054984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.055016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 
00:27:38.749 [2024-11-20 10:06:12.055274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.055310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.055498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.055536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.055791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.055830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.056099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.056132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.056306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.056342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 
00:27:38.749 [2024-11-20 10:06:12.056588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.056620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.056927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.056960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.057250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.057286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.057479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.057510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.057720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.057752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 
00:27:38.749 [2024-11-20 10:06:12.057955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.057990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.058257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.058291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.058518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.058551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.058734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.058768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 00:27:38.749 [2024-11-20 10:06:12.059054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.749 [2024-11-20 10:06:12.059088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.749 qpair failed and we were unable to recover it. 
00:27:38.749 [2024-11-20 10:06:12.059626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.749 [2024-11-20 10:06:12.059711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:38.749 qpair failed and we were unable to recover it.
00:27:38.750 [2024-11-20 10:06:12.064516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.064541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.064716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.064741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.064918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.064943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.065150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.065176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.065348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.065374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.065629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.065667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.065820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.065871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.066073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.066109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.066325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.066365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.066556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.066592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.066796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.066833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.067023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.067062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.067296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.067323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.067441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.067466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.067694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.067719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.068030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.068054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.068329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.068355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.068525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.068550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.068742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.068775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.068987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.069024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.069189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.069234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.069552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.069620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.069946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.069983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.070180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.070244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.070465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.070513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.070783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.070829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.071118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.071168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.071410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.071454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.071588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.071617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.071747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.071776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 
00:27:38.750 [2024-11-20 10:06:12.071989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.072031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.750 [2024-11-20 10:06:12.072348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.750 [2024-11-20 10:06:12.072392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.750 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.072737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.072782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.073018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.073060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.073319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.073372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.073635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.073680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.073977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.074010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.074143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.074173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.074342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.074383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.074532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.074567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.074745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.074777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.074989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.075022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.075226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.075271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.075432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.075468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.075709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.075741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.076020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.076055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.076388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.076535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.076567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.076705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.076743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.077046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.077082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.077228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.077261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.077441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.077473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.077621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.077655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.077852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.077884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.078074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.078105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.078353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.078396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.078606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.078637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.078907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.078940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.079172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.079230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.079419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.079452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.079638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.079671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.079895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.079928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.080111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.080145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.080361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.080395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.080584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.080618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 
00:27:38.751 [2024-11-20 10:06:12.080813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.080846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.081040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.081071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.081330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.081367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.751 [2024-11-20 10:06:12.081523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.751 [2024-11-20 10:06:12.081557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.751 qpair failed and we were unable to recover it. 00:27:38.752 [2024-11-20 10:06:12.081748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.081779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 
00:27:38.752 [2024-11-20 10:06:12.082056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.082096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 00:27:38.752 [2024-11-20 10:06:12.082331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 00:27:38.752 [2024-11-20 10:06:12.082558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.082590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 00:27:38.752 [2024-11-20 10:06:12.082721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.082753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 00:27:38.752 [2024-11-20 10:06:12.082893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.082927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it. 
00:27:38.752 [2024-11-20 10:06:12.083067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.752 [2024-11-20 10:06:12.083107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.752 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0xf06ba0 (addr=10.0.0.2, port=4420) from 10:06:12.083382 through 10:06:12.112515; remaining repetitions elided ...]
00:27:38.755 [2024-11-20 10:06:12.112644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.112676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.112826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.112859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.113121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.113154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.113297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.113330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.113528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.113559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 
00:27:38.755 [2024-11-20 10:06:12.113806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.113840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.114056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.114089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.114337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.114379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.114509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.114544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.114726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 
00:27:38.755 [2024-11-20 10:06:12.115095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.115128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.115380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.115418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.115601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.115633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.115763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.115795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.115996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.116030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 
00:27:38.755 [2024-11-20 10:06:12.116360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.116396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.116657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.116690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.116867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.755 [2024-11-20 10:06:12.116898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.755 qpair failed and we were unable to recover it. 00:27:38.755 [2024-11-20 10:06:12.117182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.117231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.117480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.117512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.117778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.117815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.117978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.118018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.118200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.118246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.118450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.118482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.118728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.118763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.119009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.119042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.119328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.119363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.119577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.119613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.119803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.119836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.120035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.120069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.120327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.120363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.120561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.120594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.120785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.120818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.121004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.121036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.121347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.121384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.121683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.121715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.121985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.122025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.122227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.122260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.122525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.122558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.122823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.122858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.123079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.123113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.123297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.123332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.123540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.123572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.123796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.123831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.124048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.124080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.124293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.124328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.124529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.124573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.124848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.124880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.125129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.125167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.125313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.125359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.125507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.125539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.125775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.125807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.126081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.126114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.126401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.126438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 00:27:38.756 [2024-11-20 10:06:12.126635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.126668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.756 qpair failed and we were unable to recover it. 
00:27:38.756 [2024-11-20 10:06:12.126876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.756 [2024-11-20 10:06:12.126909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.127040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.127074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.127267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.127302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.127504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.127535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.127727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.127764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 
00:27:38.757 [2024-11-20 10:06:12.128056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.128092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.128400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.128434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.128711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.128746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.129029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.129063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.129243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.129277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 
00:27:38.757 [2024-11-20 10:06:12.129413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.129449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.129739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.129774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.130037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.130069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.130326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.130369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.130663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.130696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 
00:27:38.757 [2024-11-20 10:06:12.130981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.131014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.131291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.131328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.131523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.131555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.131684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.131717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.131989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.132033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 
00:27:38.757 [2024-11-20 10:06:12.132313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.132348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.132568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.132601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.132792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.132829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.133111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.133148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 00:27:38.757 [2024-11-20 10:06:12.133326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.757 [2024-11-20 10:06:12.133359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.757 qpair failed and we were unable to recover it. 
00:27:38.760 [2024-11-20 10:06:12.162014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.162047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.162336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.162374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.162528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.162561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.162846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.162879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.163134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.163170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 
00:27:38.760 [2024-11-20 10:06:12.163424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.163458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.163619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.163651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.163880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.163916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.164060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.164102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 00:27:38.760 [2024-11-20 10:06:12.164314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.164349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.760 qpair failed and we were unable to recover it. 
00:27:38.760 [2024-11-20 10:06:12.164503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.760 [2024-11-20 10:06:12.164536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.164738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.164777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.165038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.165070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.165323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.165359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.165638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.165673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.166020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.166053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.166275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.166309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.166515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.166550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.166735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.166768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.166982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.167014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.167271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.167316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.167472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.167505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.167739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.167773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.167999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.168035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.168258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.168292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.168428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.168460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.168687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.168718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.168990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.169024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.169243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.169278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.169483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.169515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.169745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.169783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.169988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.170020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.170158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.170189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.170366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.170401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.170620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.170663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.170816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.170857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.171041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.171072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.171284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.171319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.171599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.171636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.171793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.171824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.172050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.172083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.172348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.172383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.172597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.172629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.172895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.172929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.173114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.173157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 
00:27:38.761 [2024-11-20 10:06:12.173451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.173486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.173648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.173680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.174017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.174055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.761 qpair failed and we were unable to recover it. 00:27:38.761 [2024-11-20 10:06:12.174259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.761 [2024-11-20 10:06:12.174294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.174515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.174549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.174756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.174789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.175002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.175036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.175232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.175268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.175466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.175498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.175647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.175680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.175943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.175979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.176178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.176239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.176398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.176429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.176640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.176674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.176940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.176974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.177170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.177213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.177494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.177531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.177739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.177773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.177997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.178030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.178304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.178340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.178508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.178664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.178695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.178906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.178940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.179078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.179118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.179329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.179364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.179674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.179708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.179853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.179889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.180165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.180198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.180381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.180416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 00:27:38.762 [2024-11-20 10:06:12.180615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it. 
00:27:38.762 [2024-11-20 10:06:12.180908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.762 [2024-11-20 10:06:12.180942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.762 qpair failed and we were unable to recover it.
[the connect()/qpair error pair above repeats with varying timestamps from 10:06:12.181225 through 10:06:12.211355; every attempt targets tqpair=0xf06ba0 at addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it."]
00:27:38.765 [2024-11-20 10:06:12.211499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.765 [2024-11-20 10:06:12.211533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.765 qpair failed and we were unable to recover it. 00:27:38.765 [2024-11-20 10:06:12.211752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.765 [2024-11-20 10:06:12.211788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.765 qpair failed and we were unable to recover it. 00:27:38.765 [2024-11-20 10:06:12.212068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.765 [2024-11-20 10:06:12.212101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.765 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.212430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.212472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.212703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.212737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.213055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.213087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.213393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.213429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.213709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.213743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.214038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.214071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.214242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.214280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.214604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.214637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.214787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.214820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.215055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.215091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.215402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.215436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.215718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.215751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.216058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.216346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.216382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.216574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.216607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.216907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.216942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.217233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.217268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.217503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.217546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.217796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.217829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.218053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.218093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.218332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.218371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.218678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.218879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.218911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.219121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.219154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.219395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.219431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.219586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.219618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.219852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.219885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.220112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.220147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.220399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.220435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.220691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.220723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.221051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.221086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.221251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.221310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 
00:27:38.766 [2024-11-20 10:06:12.221528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.221561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.221778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.221819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.222040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.222076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.766 [2024-11-20 10:06:12.222362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.766 [2024-11-20 10:06:12.222398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.766 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.222539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.222572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.222792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.222827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.223137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.223170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.223421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.223511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.223771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.223815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.224104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.224137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.224443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.224479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.224670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.224703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.224852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.224885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.225170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.225213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.225475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.225547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.225800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.225848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.226081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.226129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.226388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.226439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.226606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.226654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.226912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.226960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.227174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.227235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.227516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.227560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.227807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.227840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.228036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.228069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.228387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.228422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.228620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.228653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.228938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.228971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.229256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.229306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.229556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.229603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.229913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.229959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.230200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.230261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.230499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.230547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.230725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.230769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.231001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.231048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.231231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.231285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 
00:27:38.767 [2024-11-20 10:06:12.231580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.231613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.231828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.231861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.232065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.232097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.232406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.767 [2024-11-20 10:06:12.232441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.767 qpair failed and we were unable to recover it. 00:27:38.767 [2024-11-20 10:06:12.232671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.232704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 
00:27:38.768 [2024-11-20 10:06:12.232927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.232970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 00:27:38.768 [2024-11-20 10:06:12.233139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.233186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 00:27:38.768 [2024-11-20 10:06:12.233547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.233595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 00:27:38.768 [2024-11-20 10:06:12.233878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.233927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 00:27:38.768 [2024-11-20 10:06:12.234176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.768 [2024-11-20 10:06:12.234232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:38.768 qpair failed and we were unable to recover it. 
00:27:38.770 [2024-11-20 10:06:12.256392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.770 [2024-11-20 10:06:12.256474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:38.770 qpair failed and we were unable to recover it.
00:27:38.771 [2024-11-20 10:06:12.266214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.266260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.266543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.266576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.266765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.266798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.267066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.267108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.267391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.267437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 
00:27:38.771 [2024-11-20 10:06:12.267733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.267767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.267955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.267988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.268272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.268307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.268513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.268545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.268800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.268833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 
00:27:38.771 [2024-11-20 10:06:12.269034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.269066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.269289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.269324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.269561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.269593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.269799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.269831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.270112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.270145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 
00:27:38.771 [2024-11-20 10:06:12.270361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.270396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.270680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.270712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.270982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.271015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.271282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.271316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.271632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 
00:27:38.771 [2024-11-20 10:06:12.271985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.272017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.272282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.272317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.272524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.272556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.771 qpair failed and we were unable to recover it. 00:27:38.771 [2024-11-20 10:06:12.272831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.771 [2024-11-20 10:06:12.272863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.273054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.273086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.273367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.273402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.273609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.273641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.273894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.273926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.274212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.274248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.274474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.274508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.274763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.274795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.275053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.275086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.275295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.275330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.275492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.275525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.275813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.275846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.276108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.276141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.276361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.276396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.276541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.276573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.276754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.276786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.276922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.276954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.277182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.277229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.277428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.277461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.277666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.277698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.277902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.277935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.278219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.278252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.278460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.278494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.278779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.278812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.279030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.279062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.279348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.279383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.279688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.279720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.279981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.280013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.280318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.280352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.280606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.280640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.280949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.280981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.281257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.281291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 
00:27:38.772 [2024-11-20 10:06:12.281553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.281586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.281891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.281924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.282138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.282171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.282381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.772 [2024-11-20 10:06:12.282416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.772 qpair failed and we were unable to recover it. 00:27:38.772 [2024-11-20 10:06:12.282598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.282630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 
00:27:38.773 [2024-11-20 10:06:12.282819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.282851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.283098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.283137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.283339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.283375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.283643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.283675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.283975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.284008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 
00:27:38.773 [2024-11-20 10:06:12.284281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.284316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.284528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.284561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.284765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.284796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.285004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.285037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.285224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.285260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 
00:27:38.773 [2024-11-20 10:06:12.285464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.285496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.285776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.285808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.286091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.286124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.286320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.286354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.286558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.286591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 
00:27:38.773 [2024-11-20 10:06:12.286801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.286834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.287031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.287062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.287250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.287285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.287543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.287576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 00:27:38.773 [2024-11-20 10:06:12.287794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.773 [2024-11-20 10:06:12.287826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:38.773 qpair failed and we were unable to recover it. 
00:27:39.052 [2024-11-20 10:06:12.319385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.052 [2024-11-20 10:06:12.319417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.052 qpair failed and we were unable to recover it. 00:27:39.052 [2024-11-20 10:06:12.319719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.052 [2024-11-20 10:06:12.319753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.052 qpair failed and we were unable to recover it. 00:27:39.052 [2024-11-20 10:06:12.320044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.052 [2024-11-20 10:06:12.320085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.052 qpair failed and we were unable to recover it. 00:27:39.052 [2024-11-20 10:06:12.320300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.052 [2024-11-20 10:06:12.320337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.052 qpair failed and we were unable to recover it. 00:27:39.052 [2024-11-20 10:06:12.320615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.320647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.320941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.320977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.321254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.321289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.321575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.321615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.321887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.321921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.322214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.322249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.322526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.322566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.322839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.322874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.323160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.323193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.323491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.323526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.323792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.323825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.324098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.324131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.324389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.324426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.324632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.324665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.324883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.324915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.325114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.325154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.325402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.325438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.325777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.325847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.326183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.326255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.326577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.326626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.326881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.326931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.327264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.327315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.327550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.327597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.327907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.327956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.328275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.328310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.328537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.328570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.328774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.328807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.329108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.329140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.329423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.329456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.329739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.329788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.330098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.330157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.330484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.330532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.330847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.330894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.331213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.331263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.331592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.331636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.053 [2024-11-20 10:06:12.331942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.331976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 
00:27:39.053 [2024-11-20 10:06:12.332255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.053 [2024-11-20 10:06:12.332290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.053 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.332574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.332608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.332887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.332920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.333121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.333153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.333358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.333407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.333710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.333757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.334087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.334393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.334442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.334760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.334808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.335034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.335081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.335384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.335426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.335579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.335612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.335893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.335925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.336132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.336165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.336475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.336510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.336770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.336803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.337148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.337197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.337520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.337568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.337813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.337861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.338079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.338125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.338467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.338517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.338789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.338865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.339033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.339070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.339270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.339306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.339514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.339545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.339820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.339852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.340086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.340117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.340399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.340435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.340716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.340764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.341095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.341142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.341465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.341515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.341849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.341897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.342225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.342274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.342569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.342611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 00:27:39.054 [2024-11-20 10:06:12.342827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.054 [2024-11-20 10:06:12.342872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.054 qpair failed and we were unable to recover it. 
00:27:39.054 [2024-11-20 10:06:12.343151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.054 [2024-11-20 10:06:12.343183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.054 qpair failed and we were unable to recover it.
00:27:39.054 [2024-11-20 10:06:12.343478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.054 [2024-11-20 10:06:12.343512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.054 qpair failed and we were unable to recover it.
00:27:39.054 [2024-11-20 10:06:12.343730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.054 [2024-11-20 10:06:12.343763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.054 qpair failed and we were unable to recover it.
00:27:39.054 [2024-11-20 10:06:12.343991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.054 [2024-11-20 10:06:12.344032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.054 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.344363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.344413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.344736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.344782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.345083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.345133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.345400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.345449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.345740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.345787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.346094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.346144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.346404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.346453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.346794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.346842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.347177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.347238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.347561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.347607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.347931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.347981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.348196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.348263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.348592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.348640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.348953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.349000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.349236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.349287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.349584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.349630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.349938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.349986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.350166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.350215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.350501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.350534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.350803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.350836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.351060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.351106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.351322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.351357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.351650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.351682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.351879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.351911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.352245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.352278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.352558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.352593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.352900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.352947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.353258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.353306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.353589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.353637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.353890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.353933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.354247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.354292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.055 qpair failed and we were unable to recover it.
00:27:39.055 [2024-11-20 10:06:12.354588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.055 [2024-11-20 10:06:12.354627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.354906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.354936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.355199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.355240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.355438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.355468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.355739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.355775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.355950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.355980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.356228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.356266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.356592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.356635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.356936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.356978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.357278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.357325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.357627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.357669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.357948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.357992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.358321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.358357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.358606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.358636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.358828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.358858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.359156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.359185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.359488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.359517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.359697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.359739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.359995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.360038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.360279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.360323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.360631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.360676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.360982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.361024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.361244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.361288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.361565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.361603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.361891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.361921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.362119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.362149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.362430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.362463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.362710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.362740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.362918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.362948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.363228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.363273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.363563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.363605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.363890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.363968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.364285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.364327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.364615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.364653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.364924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.364961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.365226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.365261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.365462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.365495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.365792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.365828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.056 qpair failed and we were unable to recover it.
00:27:39.056 [2024-11-20 10:06:12.366093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.056 [2024-11-20 10:06:12.366125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.366424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.366461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.366748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.366782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.366978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.367011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.367266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.367300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.367599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.367632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.367901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.367934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.368153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.368186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.368516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.368549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.368838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.368870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.369092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.369125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.369397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.369432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.369745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.369778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.369984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.370017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.370271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.370305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.370636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.370923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.370956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.371218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.371252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.371403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.371436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.371636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.371669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.371951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.371989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.372226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.372260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.372530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.372564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.372817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.372849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.373053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.373085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.373304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.373340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.373567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.373600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.373876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.373909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.374245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.374522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.374554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.374832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.374864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.375159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.375192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.375484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.375518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.375793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.375825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.376078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.376111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.376312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.376346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.376624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.376657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.376862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.376894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.057 [2024-11-20 10:06:12.377195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.057 [2024-11-20 10:06:12.377264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.057 qpair failed and we were unable to recover it.
00:27:39.058 [2024-11-20 10:06:12.377542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.058 [2024-11-20 10:06:12.377576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.058 qpair failed and we were unable to recover it.
00:27:39.058 [2024-11-20 10:06:12.377847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.058 [2024-11-20 10:06:12.377879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.058 qpair failed and we were unable to recover it.
00:27:39.058 [2024-11-20 10:06:12.378177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.058 [2024-11-20 10:06:12.378224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.058 qpair failed and we were unable to recover it.
00:27:39.058 [2024-11-20 10:06:12.378507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.378541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.378808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.378841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.379059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.379092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.379288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.379322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.379530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.379562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.379814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.379847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.380147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.380180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.380491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.380525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.380792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.380824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.381081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.381114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.381420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.381455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.381715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.381748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.382037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.382070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.382327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.382361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.382595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.382628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.382854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.382887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.383164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.383196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.383486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.383520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.383825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.383857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.384071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.384109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.384339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.384374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.384576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.384608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.384816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.384848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.385100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.385133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.385328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.385363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.385571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.385603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.385855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.385887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.386087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.386120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.386399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.386434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.386689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.386723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.386976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.387009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.387263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.387297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.387516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.387549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.387758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.387792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 00:27:39.058 [2024-11-20 10:06:12.387908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.058 [2024-11-20 10:06:12.387940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.058 qpair failed and we were unable to recover it. 
00:27:39.058 [2024-11-20 10:06:12.388249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.388284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.388565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.388599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.388875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.388907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.389197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.389247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.389525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.389557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.389769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.389801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.390006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.390039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.390299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.390333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.390587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.390620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.390804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.390837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.391110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.391143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.391440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.391487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.391693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.391725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.392001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.392034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.392225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.392259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.392573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.392606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.392873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.392904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.393186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.393230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.393532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.393565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.393837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.393869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.394071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.394103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.394381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.394416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.394616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.394649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.394831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.394863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.395077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.395109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.395373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.395408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.395596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.395628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.395845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.395877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.396152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.396185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.396391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.396427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.396628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.396660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.396934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.396967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.397222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.397256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.397518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.397551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.397847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.397880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 
00:27:39.059 [2024-11-20 10:06:12.398170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.398213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.398483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.059 [2024-11-20 10:06:12.398516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.059 qpair failed and we were unable to recover it. 00:27:39.059 [2024-11-20 10:06:12.398796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.060 [2024-11-20 10:06:12.398829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.060 qpair failed and we were unable to recover it. 00:27:39.060 [2024-11-20 10:06:12.399113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.060 [2024-11-20 10:06:12.399145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.060 qpair failed and we were unable to recover it. 00:27:39.060 [2024-11-20 10:06:12.399431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.060 [2024-11-20 10:06:12.399466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.060 qpair failed and we were unable to recover it. 
00:27:39.060 [2024-11-20 10:06:12.399670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.060 [2024-11-20 10:06:12.399702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.060 qpair failed and we were unable to recover it.
[... repeated identical connect() retries to 10.0.0.2:4420 (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock failure on tqpair=0xf06ba0) from 10:06:12.399 through 10:06:12.432 elided ...]
00:27:39.063 [2024-11-20 10:06:12.432551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.432584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.432784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.432817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.433102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.433135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.433437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.433471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.433675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.433707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.433926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.433958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.434236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.434270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.434492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.434807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.434840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.435122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.435153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.435439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.435473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.435811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.435844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.435989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.436020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.436323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.436357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.436620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.436653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.436855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.436889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.437175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.437217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.437513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.437546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.437829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.437860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.438152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.438185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.438386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.438419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.438698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.438731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.438988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.439021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.439223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.439257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.439531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.439564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.439823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.439856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.440043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.440075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.440352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.440387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.440657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.440689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.440895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.440927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 
00:27:39.063 [2024-11-20 10:06:12.441075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.063 [2024-11-20 10:06:12.441108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.063 qpair failed and we were unable to recover it. 00:27:39.063 [2024-11-20 10:06:12.441385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.441419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.441695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.441726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.442024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.442063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.442350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.442383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.442656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.442688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.442904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.442937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.443214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.443248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.443512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.443545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.443844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.443876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.444152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.444185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.444477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.444510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.444789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.444821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.445010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.445042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.445354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.445564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.445597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.445780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.445814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.446014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.446048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.446339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.446373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.446598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.446629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.446910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.446943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.447234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.447268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.447486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.447519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.447701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.447734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.447987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.448020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.448240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.448273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.448472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.448504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.448806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.448839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.449061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.449093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.449357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.449392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.449626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.449658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.449887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.449920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.450146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.450178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.450503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.450537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 00:27:39.064 [2024-11-20 10:06:12.450761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.064 [2024-11-20 10:06:12.450794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.064 qpair failed and we were unable to recover it. 
00:27:39.064 [2024-11-20 10:06:12.451057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.451090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.451387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.451422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.451711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.451743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.451960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.451993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.452272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.452306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 
00:27:39.065 [2024-11-20 10:06:12.452594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.452627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.452900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.452933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.453197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.453243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.453443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.453476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.453789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.453822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 
00:27:39.065 [2024-11-20 10:06:12.454081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.454113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.454415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.454450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.454640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.454672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.454874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.454906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 00:27:39.065 [2024-11-20 10:06:12.455103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.065 [2024-11-20 10:06:12.455135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.065 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.486562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.486594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.486876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.486909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.487195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.487239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.487509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.487541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.487822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.487854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.488143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.488176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.488457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.488489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.488774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.488807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.489091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.489124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.489368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.489402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.489705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.489737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.490028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.490061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.490337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.490371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.490662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.490695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.490968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.491001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.491189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.491232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.491435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.491468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.491657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.491690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.491969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.492002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.492230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.492264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.492458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.492497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.492703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.492736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.492997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.493030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.493285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.493319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.493576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.493609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 
00:27:39.068 [2024-11-20 10:06:12.493909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.493942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.494230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.494263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.494375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.068 [2024-11-20 10:06:12.494408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.068 qpair failed and we were unable to recover it. 00:27:39.068 [2024-11-20 10:06:12.494685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.494717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.494982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.495015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.495299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.495334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.495581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.495613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.495797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.495829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.496131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.496163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.496455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.496490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.496706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.496739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.496939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.496972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.497174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.497217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.497500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.497532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.497725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.497756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.498101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.498133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.498417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.498451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.498671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.498703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.498992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.499024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.499300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.499335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.499528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.499559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.499861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.499894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.500095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.500133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.500389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.500423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.500676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.500708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.500959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.500992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.501260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.501294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.501574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.501606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.501857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.501890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.502194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.502247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.502434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.502467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.502721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.502753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.502970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.503003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.503267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.503302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.503494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.503526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 
00:27:39.069 [2024-11-20 10:06:12.503803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.503835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.503966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.069 [2024-11-20 10:06:12.504000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.069 qpair failed and we were unable to recover it. 00:27:39.069 [2024-11-20 10:06:12.504278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.504312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.504448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.504481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.504765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.504798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 
00:27:39.070 [2024-11-20 10:06:12.505069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.505101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.505387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.505422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.505613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.505645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.505903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.505936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.506185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.506236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 
00:27:39.070 [2024-11-20 10:06:12.506532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.506565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.506866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.506898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.507166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.507198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.507429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.507463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 00:27:39.070 [2024-11-20 10:06:12.507718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.070 [2024-11-20 10:06:12.507750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.070 qpair failed and we were unable to recover it. 
00:27:39.070 [2024-11-20 10:06:12.508042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.070 [2024-11-20 10:06:12.508075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.070 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111, ECONNREFUSED) for tqpair=0xf06ba0 at 10.0.0.2:4420 repeated 114 more times between 10:06:12.508 and 10:06:12.540; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:39.073 [2024-11-20 10:06:12.540617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.540650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.540849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.540882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.541081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.541115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.541394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.541428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.541612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.541644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 
00:27:39.073 [2024-11-20 10:06:12.541857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.541890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.542084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.542116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.542327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.542361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.542553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.542585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.542783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.542815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 
00:27:39.073 [2024-11-20 10:06:12.543102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.543135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.543421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.543455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.543734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.543767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.544020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.544302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.544335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 
00:27:39.073 [2024-11-20 10:06:12.544486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.544518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.544696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.544729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.545002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.545033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.545316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.545350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.073 [2024-11-20 10:06:12.545557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.545590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 
00:27:39.073 [2024-11-20 10:06:12.545888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.073 [2024-11-20 10:06:12.545921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.073 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.546072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.546104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.546308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.546341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.546620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.546653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.546934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.546966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.547152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.547184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.547398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.547430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.547686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.547720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.547905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.547936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.548222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.548256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.548514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.548547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.548764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.548796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.549068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.549099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.549304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.549337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.549565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.549603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.549829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.549862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.550144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.550176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.550401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.550435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.550714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.550745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.550943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.550976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.551171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.551214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.551493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.551526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.551639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.551671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.551870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.551903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.552087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.552372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.552406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.552689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.552722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.553003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.553036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.553322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.553358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.553630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.553662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.553951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.553984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.554268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.554302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.554449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.554483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.554712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.554746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.555029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.555062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 
00:27:39.074 [2024-11-20 10:06:12.555292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.555327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.555653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.555686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.555981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.074 [2024-11-20 10:06:12.556015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.074 qpair failed and we were unable to recover it. 00:27:39.074 [2024-11-20 10:06:12.556284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.556320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.556590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.556622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.556753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.556786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.557085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.557117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.557315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.557353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.557541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.557575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.557876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.557909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.558173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.558217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.558448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.558501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.558811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.558858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.559112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.559158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.559502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.559550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.559765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.559798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.560094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.560127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.560412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.560448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.560597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.560630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.560884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.560917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.561183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.561237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.561501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.561550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.561850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.561897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.562130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.562178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.562507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.562557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.562921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.562967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.563230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.563285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.563560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.563595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.563907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.563941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.564229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.564270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.564470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.564505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.564759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.564791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.565047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.565096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.565354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.565404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.565637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.565685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.565913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.565962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.566259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.566310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.566630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.566678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.566912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.566963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.567190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.567238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 
00:27:39.075 [2024-11-20 10:06:12.567494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.075 [2024-11-20 10:06:12.567527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.075 qpair failed and we were unable to recover it. 00:27:39.075 [2024-11-20 10:06:12.567663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.567695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.567956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.567988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.568220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.568253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.568484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.568517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.568814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.568863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.569189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.569253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.569566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.569627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.569990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.570337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.570388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.570692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.570733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.571026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.571060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.571279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.571313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.571573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.571606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.571807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.571840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.572041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.572073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.572244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.572290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.572548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.572596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.572904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.572949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.573184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.573244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.573444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.573491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.573810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.573856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.574172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.574240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.574532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.574570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.574722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.574964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.574996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.575179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.575224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.575437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.575469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.575711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.575746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.576021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.576069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.576386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.576434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.576740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.576789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.577104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.577152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.577428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.577477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.577763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.577812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.076 [2024-11-20 10:06:12.578086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.578120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.578392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.578426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.578711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.578744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.579021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.579054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 00:27:39.076 [2024-11-20 10:06:12.579290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.076 [2024-11-20 10:06:12.579334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.076 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.579591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.579637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.579949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.579996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.580228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.580277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.580590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.580638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.580957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.581004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.581259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.581314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.581619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.581655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.581883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.581916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.582124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.582165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.582511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.582545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.582839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.582872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.583149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.583197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.583520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.583567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.583894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.583944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.584162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.584219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.584472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.584519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.584833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.584884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.585117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.585152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.585461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.585495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.585748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.585781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.586080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.586112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.586389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.586423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.586717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.586764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.587014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.587061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.587356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.587404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.587651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.587701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.587918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.587965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.588232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.588519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.588566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.588861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.588900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.589173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.589217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.589453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.589485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.589700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.589733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.590009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.590042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.590295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.590330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 
00:27:39.077 [2024-11-20 10:06:12.590649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.590706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.591015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.591062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.591305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.077 [2024-11-20 10:06:12.591355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.077 qpair failed and we were unable to recover it. 00:27:39.077 [2024-11-20 10:06:12.591667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.591712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 00:27:39.078 [2024-11-20 10:06:12.592047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.592093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 
00:27:39.078 [2024-11-20 10:06:12.592418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.592470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 00:27:39.078 [2024-11-20 10:06:12.592791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.592838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 00:27:39.078 [2024-11-20 10:06:12.593161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.593224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 00:27:39.078 [2024-11-20 10:06:12.593548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.593596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 00:27:39.078 [2024-11-20 10:06:12.593813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.078 [2024-11-20 10:06:12.593858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.078 qpair failed and we were unable to recover it. 
00:27:39.078 [2024-11-20 10:06:12.594228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.594282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.594590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.594626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.594862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.594894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.595169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.595216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.595500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.595533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.595801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.595834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.596137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.596184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.596512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.596558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.596879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.596929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.597250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.597299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.597640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.597685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.597994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.598033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.598314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.598346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.598631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.598661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.598943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.598973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.599161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.599192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.599415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.599445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.599765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.599810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.600059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.600102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.600395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.600443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.600665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.600708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.601001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.601044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.601267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.601317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.601519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.601551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.601739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.601769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.601980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.078 [2024-11-20 10:06:12.602009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.078 qpair failed and we were unable to recover it.
00:27:39.078 [2024-11-20 10:06:12.602309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.602340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.602611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.602641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.602940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.602977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.603273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.603319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.603624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.603669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.604000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.604055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.604312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.604357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.604565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.604607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.604828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.604873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.605160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.605191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.605452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.605483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.605731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.605761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.606087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.606116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.606385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.606417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.606717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.606761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.607045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.607087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.607364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.607420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.607732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.607779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.608083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.608131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.608401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.608455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.608769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.608805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.609010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.609044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.609319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.609352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.609610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.609642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.609944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.609979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.610260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.610310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.610620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.610667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.610904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.610950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.611251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.611301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.611592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.611639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.611886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.611933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.612226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.612267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.079 [2024-11-20 10:06:12.612555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.079 [2024-11-20 10:06:12.612597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.079 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.612784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.612816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.613099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.613131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.613458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.613493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.613773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.613818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.614151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.614198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.614474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.614522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.614841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.614890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.615143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.615191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.615535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.615582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.615910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.615951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.616142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.616175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.616476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.616508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.616712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.616746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.617011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.617089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.617449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.617491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.617777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.617812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.618079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.618112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.618410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.618446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.618749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.618781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.619034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.619067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.619323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.619358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.619565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.619597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.619887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.619920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.620193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.620234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.620437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.620469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.620745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.620778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.621058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.621101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.621246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.621280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.621567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.621600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.621782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.621815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.622065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.622098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.349 [2024-11-20 10:06:12.622398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.349 [2024-11-20 10:06:12.622432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.349 qpair failed and we were unable to recover it.
00:27:39.350 [2024-11-20 10:06:12.622718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.350 [2024-11-20 10:06:12.622751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.350 qpair failed and we were unable to recover it.
00:27:39.350 [2024-11-20 10:06:12.622953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.350 [2024-11-20 10:06:12.622985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.350 qpair failed and we were unable to recover it.
00:27:39.350 [2024-11-20 10:06:12.623193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.350 [2024-11-20 10:06:12.623237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.350 qpair failed and we were unable to recover it.
00:27:39.350 [2024-11-20 10:06:12.623492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.350 [2024-11-20 10:06:12.623525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.350 qpair failed and we were unable to recover it.
00:27:39.350 [2024-11-20 10:06:12.623662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.623694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.623969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.624002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.624232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.624265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.624565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.624599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.624864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.624897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.625105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.625138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.625406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.625440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.625726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.625759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.626035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.626067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.626221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.626255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.626534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.626568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.626890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.627114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.627147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.627349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.627383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.627636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.627669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.627928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.627961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.628263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.628299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.628651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.628725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.629027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.629063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.629342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.629377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.629614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.629647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.629922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.629955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.630220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.630257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.630486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.630519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.630719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.630752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.630968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.631000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.631275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.631310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.631623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.631654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.631863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.631896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.632078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.632112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.632387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.632431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.632710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.632743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.632955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.632988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.633241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.633275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.633585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.633618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.633904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.633936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.634141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.634174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.634388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.634421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.634678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.634710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.634985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.635018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.635275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.635310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.635568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.635599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.635876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.635909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.636163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.636195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.636510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.636544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.636798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.636830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.637092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.637126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.637309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.637342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.637620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.637653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.637859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.637892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.638031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.638064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.638342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.638377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.638651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.638684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.638974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.639007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.639286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.639320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.639606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.639639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.639851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.639884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.640172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.640214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.640420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.640454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.640660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.640692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.640888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.640921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.641198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.641240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.641391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.641425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.641728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.641761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.642024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.642058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 
00:27:39.350 [2024-11-20 10:06:12.642362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.642397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.350 qpair failed and we were unable to recover it. 00:27:39.350 [2024-11-20 10:06:12.642660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.350 [2024-11-20 10:06:12.642693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.642955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.642988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.643288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.643323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.643589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.643621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 
00:27:39.351 [2024-11-20 10:06:12.643824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.643862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.644116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.644149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.644445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.644479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.644687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.644719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.644949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.644982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 
00:27:39.351 [2024-11-20 10:06:12.645236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.645270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.645530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.645562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.645869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.645903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.646089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.646121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.646382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.646416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 
00:27:39.351 [2024-11-20 10:06:12.646703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.646736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.646948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.646981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.647188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.647242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.647469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.647502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 00:27:39.351 [2024-11-20 10:06:12.647643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.351 [2024-11-20 10:06:12.647676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.351 qpair failed and we were unable to recover it. 
00:27:39.351 [2024-11-20 10:06:12.647952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.351 [2024-11-20 10:06:12.647984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.351 qpair failed and we were unable to recover it.
00:27:39.352 [2024-11-20 10:06:12.662648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.352 [2024-11-20 10:06:12.662721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.352 qpair failed and we were unable to recover it.
00:27:39.354 [2024-11-20 10:06:12.680701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.680734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.680868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.680901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.681100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.681142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.681446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.681482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.681769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.681802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 
00:27:39.354 [2024-11-20 10:06:12.682103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.682138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.682421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.682456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.682714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.682747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.354 [2024-11-20 10:06:12.683041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.354 [2024-11-20 10:06:12.683077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.354 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.683303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.683337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.683548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.683582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.683865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.683900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.684034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.684067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.684370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.684404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.684632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.684668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.684925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.684958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.685259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.685293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.685556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.685598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.685890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.686188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.686246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.686526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.686562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.686821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.686854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.687058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.687090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.687353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.687390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.687599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.687632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.687770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.687801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.688098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.688133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.688333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.688368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.688676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.688707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.688910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.688944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.689232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.689267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.689567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.689600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.689888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.689923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.690104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.690137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.690413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.690446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.690653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.690693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.690983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.691016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.691247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.691282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.691554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.691589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.691874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.691907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.692186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.692248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.692510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.692545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.692819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.692853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.693136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.693170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 00:27:39.355 [2024-11-20 10:06:12.693465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.355 [2024-11-20 10:06:12.693501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.355 qpair failed and we were unable to recover it. 
00:27:39.355 [2024-11-20 10:06:12.693766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.693799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.693996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.694031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.694300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.694334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.694620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.694653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.694935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.694970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.695230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.695264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.695414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.695448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.695729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.695764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.696020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.696053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.696338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.696380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.696587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.696622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.696848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.696881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.697074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.697113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.697374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.697621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.697654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.697853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.697885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.698163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.698198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.698445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.698478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.698684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.698716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.698857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.698889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.699096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.699132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.699434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.699469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.699731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.699768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.700056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.700091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.700293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.700327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.700550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.700823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.700859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.701137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.701169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.701458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.701493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 00:27:39.356 [2024-11-20 10:06:12.701773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.701808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it. 
00:27:39.356 [2024-11-20 10:06:12.702089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.356 [2024-11-20 10:06:12.702120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.356 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 10:06:12.702 through 10:06:12.735, differing only in timestamps ...]
00:27:39.360 [2024-11-20 10:06:12.735700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.735733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.735970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.736005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.736222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.736256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.736516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.736549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.736809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.736844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 
00:27:39.360 [2024-11-20 10:06:12.737054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.737087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.737242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.737277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.737557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.737592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.737828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.737862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.738012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.738045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 
00:27:39.360 [2024-11-20 10:06:12.738253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.738288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.738548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.738583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.738769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.738802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.739082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.739114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.739373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.739412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 
00:27:39.360 [2024-11-20 10:06:12.739638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.739671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.739920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.739953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.740242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.740280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.740552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.740585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.740863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.740894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 
00:27:39.360 [2024-11-20 10:06:12.741189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.741248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.741549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.741583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.741863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.741901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.360 qpair failed and we were unable to recover it. 00:27:39.360 [2024-11-20 10:06:12.742185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.360 [2024-11-20 10:06:12.742235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.742442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.742476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.742785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.742820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.743099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.743132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.743335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.743380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.743685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.743719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.743953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.743986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.744241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.744277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.744582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.744618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.744805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.744839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.745102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.745136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.745415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.745451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.745736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.745768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.746046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.746086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.746368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.746406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.746543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.746576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.746829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.746861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.747052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.747087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.747374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.747409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.747596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.747628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.747903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.747938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.748235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.748271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.748534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.748567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.748840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.748874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.749076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.749109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.749362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.749397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.749679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.749713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.750014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.750047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.750312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.750347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.750568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.750602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.750912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.750946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.751342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.751427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 
00:27:39.361 [2024-11-20 10:06:12.751617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.751655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.751867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.361 [2024-11-20 10:06:12.751900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.361 qpair failed and we were unable to recover it. 00:27:39.361 [2024-11-20 10:06:12.752154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.752188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.752410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.752442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.752699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.752733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.753014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.753057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.753372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.753424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.753684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.753730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.754066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.754118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.754436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.754490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.754806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.754857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.755168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.755237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.755535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.755604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.755859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.755906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.756195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.756262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.756450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.756499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.756755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.756804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.757110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.757162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.757442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.757480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.757630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.757664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.757950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.757983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.758192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.758237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.758525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.758557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.758848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.758881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.759167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.759229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.759521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.759568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.759839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.759887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.760118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.760165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.760494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.760544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.760793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.760837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 
00:27:39.362 [2024-11-20 10:06:12.761006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.761042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.362 [2024-11-20 10:06:12.761239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.362 [2024-11-20 10:06:12.761270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.362 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.761546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.761577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.761866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.761895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.762089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.762121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.762333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.762365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.762634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.762664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.762853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.762887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.763104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.763148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.763433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.763509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.763858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.763937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.764225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.764264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.764524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.764556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.764764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.764795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.765063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.765096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.765246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.765281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.765536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.765570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.765798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.765831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.766029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.766062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.766268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.766303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.766505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.766537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.766760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.766794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.767071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.767113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.767319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.767353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.767610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.767643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.767920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.767953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.768237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.768272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.768521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.768553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.768837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.768870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.769074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.769107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 
00:27:39.363 [2024-11-20 10:06:12.769385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.769421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.769626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.769658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.769859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.363 [2024-11-20 10:06:12.769892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.363 qpair failed and we were unable to recover it. 00:27:39.363 [2024-11-20 10:06:12.770171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.770213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.770473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.770506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.770641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.770675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.770937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.770970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.771226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.771260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.771482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.771515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.771763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.771796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.772078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.772112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.772319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.772353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.772537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.772571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.772825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.772858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.773072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.773106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.773373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.773407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.773631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.773664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.773856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.773889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.774114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.774147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.774429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.774465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.774745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.774778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.775033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.775066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.775339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.775374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.775655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.775687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.775885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.775918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.776110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.776143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.776411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.776445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.776600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.776633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.776819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.776852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.777112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.777145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.777364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.777398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.777611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.777644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.777857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.777896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.778086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.778118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.778379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.778415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 
00:27:39.364 [2024-11-20 10:06:12.778635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.778668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.778965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.778997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.779219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.779253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.364 [2024-11-20 10:06:12.779454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.364 [2024-11-20 10:06:12.779488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.364 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.779669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.779703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 
00:27:39.365 [2024-11-20 10:06:12.779906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.779939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.780122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.780156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.780383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.780417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.780694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.780727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.781012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.781046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 
00:27:39.365 [2024-11-20 10:06:12.781335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.781370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.781647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.781681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.781883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.781918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.782180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.782246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.782514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.782548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 
00:27:39.365 [2024-11-20 10:06:12.782681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.782715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.782993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.783028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.783233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.783268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.783501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.783534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.783814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.783847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 
00:27:39.365 [2024-11-20 10:06:12.784048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.784081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.784308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.784344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.784544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.784577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.784882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.784916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 00:27:39.365 [2024-11-20 10:06:12.785280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.365 [2024-11-20 10:06:12.785360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.365 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.814418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.814458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.814729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.814762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.814957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.814990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.815259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.815297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.815576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.815609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.815855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.815888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.816090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.816125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.816402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.816437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.816743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.816786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.817095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.817136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.817325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.817359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.817617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.817657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.817863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.817896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.818089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.818121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.818317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.818352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.818599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.818635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.818890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.818923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.819229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.819263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.819482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.819517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.819705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.819737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.819991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.820023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.820312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.820348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.820628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.820661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.820943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.820978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.821289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.821324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 
00:27:39.369 [2024-11-20 10:06:12.821512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.821544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.821808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.821843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.369 [2024-11-20 10:06:12.822129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-11-20 10:06:12.822164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.369 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.822394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.822429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.822635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.822667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.822989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.823023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.823231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.823266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.823539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.823574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.823725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.823759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.823963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.823995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.824287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.824321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.824598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.824633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.824909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.824942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.825231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.825268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.825543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.825578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.825794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.825829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.826134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.826169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.826384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.826420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.826625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.826657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.826857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.826889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.827098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.827133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.827268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.827303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.827496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.827532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.827756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.827789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.828033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.828076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.828328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.828363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.828647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.828682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.828848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.828883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.829092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.829125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.829426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.829462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.829665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.829700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.829983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.830017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.830325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.830359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.830565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.830601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.830861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.830893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 
00:27:39.370 [2024-11-20 10:06:12.831109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.831142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.370 [2024-11-20 10:06:12.831428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-11-20 10:06:12.831464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.370 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.831741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.831773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.832057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.832090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.832283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.832320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.832519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.832551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.832775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.832807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.833011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.833043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.833287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.833323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.833625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.833658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.833936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.833974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.834301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.834334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.834589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.834622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.834860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.834900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.835153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.835186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.835419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.835453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.835660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.835696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.836002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.836037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.836313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.836348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.836631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.836667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.836977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.837010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.837228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.837262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.837508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.837741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.837773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.837969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.838001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.838284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.838321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.838527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.838560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.838862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.838895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.839105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.839139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.839437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.839485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 
00:27:39.371 [2024-11-20 10:06:12.839695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.839729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.839985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.840019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.840281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.371 [2024-11-20 10:06:12.840317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.371 qpair failed and we were unable to recover it. 00:27:39.371 [2024-11-20 10:06:12.840618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.840653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.840903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.840937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.841074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.841106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.841387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.841422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.841559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.841603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.841831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.841864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.842164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.842195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.842524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.842559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.842763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.842796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.843013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.843046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.843335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.843371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.843648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.843681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.843911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.843944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.844197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.844250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.844514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.844547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.844764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.844796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.845072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.845108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.845325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.845375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.845577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.845610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.845806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.845846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.846163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.846198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.846425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.846458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.846709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.846750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.847021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.847055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.847238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.847538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.847578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.847858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.847902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.848137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.848177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.848492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.848528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.848757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.848789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.849071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.849103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.849289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.849324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 
00:27:39.372 [2024-11-20 10:06:12.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.849485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 [2024-11-20 10:06:12.849736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.372 [2024-11-20 10:06:12.849769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.372 qpair failed and we were unable to recover it. 00:27:39.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2815553 Killed "${NVMF_APP[@]}" "$@" 00:27:39.372 [2024-11-20 10:06:12.850073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.850106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.850352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.850386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 [2024-11-20 10:06:12.850653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.850687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.851005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.851037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:39.373 [2024-11-20 10:06:12.851315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.851350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.373 [2024-11-20 10:06:12.851634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.851666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.373 [2024-11-20 10:06:12.851952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.851986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.852246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.852281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.852504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.852536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.852800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.852833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 [2024-11-20 10:06:12.853118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.853151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.853373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.853407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.853662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.853693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.853935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.853968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.854151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 [2024-11-20 10:06:12.854402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.854435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.854692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.854724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.854852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.854885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.855149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.855181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.855337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.855371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 [2024-11-20 10:06:12.855673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.855706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.855972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.856006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.856305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.856340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.856576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.856608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.373 [2024-11-20 10:06:12.856810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.856841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 
00:27:39.373 [2024-11-20 10:06:12.857041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.373 [2024-11-20 10:06:12.857073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.373 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.857347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.857388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.857594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.857626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.857904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.857937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.858142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.858175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 
00:27:39.374 [2024-11-20 10:06:12.858407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.858441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.858648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.858682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.858905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.858937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.859194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.859242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.859411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.859447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 
00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2816271 00:27:39.374 [2024-11-20 10:06:12.859742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.859777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2816271 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:39.374 [2024-11-20 10:06:12.859992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.860029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.860231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.860268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 
00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2816271 ']' 00:27:39.374 [2024-11-20 10:06:12.860575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.860613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b9 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.374 0 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.860867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.860903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.374 [2024-11-20 10:06:12.861218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.861258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:39.374 [2024-11-20 10:06:12.861468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.861503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.374 [2024-11-20 10:06:12.861700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.861735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 10:06:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.374 [2024-11-20 10:06:12.861946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.861981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 00:27:39.374 [2024-11-20 10:06:12.862233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.374 [2024-11-20 10:06:12.862268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.374 qpair failed and we were unable to recover it. 
00:27:39.374 [2024-11-20 10:06:12.862527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.862559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.862763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.862796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.863041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.863073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.863231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.863273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.863472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.863505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.863788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.863820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.863975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.864017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.374 qpair failed and we were unable to recover it.
00:27:39.374 [2024-11-20 10:06:12.864229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.374 [2024-11-20 10:06:12.864266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.864469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.864502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.864701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.864733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.864966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.865003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.865240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.865274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.865553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.865587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.865865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.865902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.866141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.866178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.866446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.866481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.866667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.866699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.866892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.866925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.867071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.867105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.867374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.867410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.867664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.867698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.867901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.867934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.868122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.868154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.868448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.868483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.868685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.868718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.868993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.869025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.869221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.869259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.869534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.869566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.869862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.869896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.870176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.870232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.870448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.870481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.870689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.870722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.871014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.871046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.871252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.871286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.871416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.871447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.871667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.871700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.871981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.872014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.872268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.375 [2024-11-20 10:06:12.872302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.375 qpair failed and we were unable to recover it.
00:27:39.375 [2024-11-20 10:06:12.872579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.872611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.872861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.872895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.873042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.873075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.873300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.873334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.873542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.873574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.873776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.873816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.874014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.874046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.874184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.874226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.874423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.874453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.874780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.874813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.875011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.875045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.875353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.875386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.875533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.875565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.875749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.875783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.875980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.876011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.876224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.876257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.876390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.876423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.876583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.876614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.876818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.876854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.877071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.877105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.877387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.877421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.877620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.877652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.877763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.877794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.877986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.878018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.878243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.878277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.878484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.878516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.878798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.878831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.879099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.879131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.879369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.879402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.879538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.879572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.879777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.879809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.376 [2024-11-20 10:06:12.880089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.376 [2024-11-20 10:06:12.880123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.376 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.880265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.880301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.880554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.880586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.880842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.880875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.881081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.881113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.881304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.881338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.881593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.881624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.881940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.881973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.882112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.882144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.882364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.882398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.882652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.882685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.882979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.883013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.883296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.883328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.883584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.883617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.883920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.883958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.884232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.884265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.884414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.884446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.884700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.884733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.885010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.885042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.885254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.885289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.885474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.885505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.885694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.885727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.885994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.886026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.886237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.886272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.886471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.886504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.886695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.377 [2024-11-20 10:06:12.886729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.377 qpair failed and we were unable to recover it.
00:27:39.377 [2024-11-20 10:06:12.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.887015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.887219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.887253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.887469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.887502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.887781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.887814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.887959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.887991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 
00:27:39.377 [2024-11-20 10:06:12.888127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.888159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.888453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.888487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.888667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.377 [2024-11-20 10:06:12.888700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.377 qpair failed and we were unable to recover it. 00:27:39.377 [2024-11-20 10:06:12.888962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.888995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.889252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.889288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 
00:27:39.378 [2024-11-20 10:06:12.889412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.889444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.889698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.889730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.889936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.889969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.890164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.890195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.890500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.890533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 
00:27:39.378 [2024-11-20 10:06:12.890754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.890786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.891015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.891047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.891311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.891346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.891479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.891510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.891638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.891669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 
00:27:39.378 [2024-11-20 10:06:12.891864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.891905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.892115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.892148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.892344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.892376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.892507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.892539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.892797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.892827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 
00:27:39.378 [2024-11-20 10:06:12.893034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.893065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.893558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.893589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.893727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.893765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.893878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.893911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 
00:27:39.378 [2024-11-20 10:06:12.894106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.894140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.894418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.894452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.894579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.894611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.894810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.894841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.378 qpair failed and we were unable to recover it. 00:27:39.378 [2024-11-20 10:06:12.895045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.378 [2024-11-20 10:06:12.895077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.895308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.895342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.895473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.895504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.895690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.895725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.895907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.895938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.896195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.896239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.896512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.896544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.896697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.896729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.896932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.896963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.897097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.897127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.897385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.897420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.897635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.897667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.897860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.897893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.898090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.898121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.898322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.898355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.898583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.898615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.898891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.898926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.899064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.899096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.899346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.899379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.899659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.899690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.899804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.899836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.899987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.900018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.900273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.900308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.900514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.900545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.900819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.900852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.901050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.901082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.901291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.901325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.901603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.901636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.901838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.901870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.902008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.902042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.902233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.902267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 
00:27:39.379 [2024-11-20 10:06:12.902389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.379 [2024-11-20 10:06:12.902419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.379 qpair failed and we were unable to recover it. 00:27:39.379 [2024-11-20 10:06:12.902605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.902637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.902830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.902862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.903000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.903039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.903249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.903284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 
00:27:39.380 [2024-11-20 10:06:12.903506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.903537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.903748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.903781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.904034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.904066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.904283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.904317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.904583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.904615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 
00:27:39.380 [2024-11-20 10:06:12.904738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.904769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.904972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.905003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.905198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.905241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.905421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.905453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.905647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.905680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 
00:27:39.380 [2024-11-20 10:06:12.905885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.905916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.906101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.906134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.906334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.906368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.906564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.906595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 00:27:39.380 [2024-11-20 10:06:12.906895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.380 [2024-11-20 10:06:12.906927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.380 qpair failed and we were unable to recover it. 
00:27:39.380 [2024-11-20 10:06:12.907112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.380 [2024-11-20 10:06:12.907144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.380 qpair failed and we were unable to recover it.
[the three records above repeat with advancing timestamps from 10:06:12.907 through 10:06:12.912 — identical errno = 111 connection failures for tqpair=0x7fa2b4000b90, addr=10.0.0.2, port=4420]
00:27:39.381 [2024-11-20 10:06:12.912141] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization...
00:27:39.381 [2024-11-20 10:06:12.912208] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[the same errno = 111 failure trio continues repeating from 10:06:12.912 through 10:06:12.933, interleaved with wall-clock stamps 00:27:39.380–00:27:39.662, while the target finishes initializing]
00:27:39.662 [2024-11-20 10:06:12.933562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.933594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.933712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.933746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.933869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.933900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.934086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.934119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.934314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.934347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 
00:27:39.662 [2024-11-20 10:06:12.934550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.934582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.934702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.934735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.662 [2024-11-20 10:06:12.934934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.662 [2024-11-20 10:06:12.934965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.662 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.935169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.935210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.935402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.935554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.935588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.935715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.935746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.935942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.935973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.936094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.936143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.936354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.936389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.936670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.936704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.936944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.937233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.937267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.937513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.937546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.937743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.937775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.937977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.938010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.938301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.938335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.938518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.938553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.938798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.938830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.939033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.939073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.939221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.939256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.939502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.939534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.939751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.939784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.940038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.940076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.940314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.940348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.940468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.940500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.940705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.940739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.940926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.940959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.941156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.941189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.941484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.941517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 
00:27:39.663 [2024-11-20 10:06:12.941701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.941733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.663 [2024-11-20 10:06:12.941943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.663 [2024-11-20 10:06:12.941975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.663 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.942167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.942199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.942431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.942462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.942639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.942673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.942866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.942898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.943175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.943216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.943374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.943406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.943582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.943615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.943731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.943762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.943949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.943981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.944193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.944235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.944486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.944517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.944710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.944743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.944885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.944917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.945192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.945235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.945482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.945513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.945636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.945669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.945856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.945886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.946150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.946182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.946380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.946412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.946607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.946637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.946773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.946804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.946981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.947012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.947144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.947175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.947371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.947402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.947591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.947622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.947815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.947847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.948043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.948074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.948294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.948328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.948456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.948488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.948675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.948706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.948893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.948925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.949214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.949254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.949508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.949540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 
00:27:39.664 [2024-11-20 10:06:12.949765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.664 [2024-11-20 10:06:12.949798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.664 qpair failed and we were unable to recover it. 00:27:39.664 [2024-11-20 10:06:12.949924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.949956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.950160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.950192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.950493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.950525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.950646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.950685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 
00:27:39.665 [2024-11-20 10:06:12.950896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.950927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.951102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.951134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.951326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.951359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.951540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 00:27:39.665 [2024-11-20 10:06:12.951761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.665 [2024-11-20 10:06:12.951793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.665 qpair failed and we were unable to recover it. 
00:27:39.665 [... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." records for tqpair=0x7fa2b4000b90 (addr=10.0.0.2, port=4420) repeat through [2024-11-20 10:06:12.977295] ...]
00:27:39.669 [2024-11-20 10:06:12.977481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.977513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.977781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.977813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.977935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.977966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.978183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.978244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.978373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.978403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.978588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.978619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.978800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.978831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.979071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.979103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.979293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.979327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.979574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.979605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.979710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.979742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.979991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.980023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.980139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.980170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.980363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.980396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.980604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.980636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.980811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.980842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.981016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.981048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.981222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.981255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.981446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.981477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.981610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.981639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.981879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.981912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.982176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.982243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.982437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.982469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.982665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.982697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.982897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.982933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.982987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf14af0 (9): Bad file descriptor 00:27:39.669 [2024-11-20 10:06:12.983214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.983284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.983509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.983545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.983807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.983840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.984058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.984088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 00:27:39.669 [2024-11-20 10:06:12.984276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.669 [2024-11-20 10:06:12.984310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.669 qpair failed and we were unable to recover it. 
00:27:39.669 [2024-11-20 10:06:12.984500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.984531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.984725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.984758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.984972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.985121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.985294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.985575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.985734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.985948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.985978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.986291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.986364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.986510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.986546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.986822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.986855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.987034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.987067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.987275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.987310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.987434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.987467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.987646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.987678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.987871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.987903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.988157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.988189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.988392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.988434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.988604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.988637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.988761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.988792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.988969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.989002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.989188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.989239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.989430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.989462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.989597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.989629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.989841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.989873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.990060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.990092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.990225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.990259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.990439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.990470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.990658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.990690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.990863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.990894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 
00:27:39.670 [2024-11-20 10:06:12.991077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.991109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.991294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.991328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.991511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.670 [2024-11-20 10:06:12.991542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.670 qpair failed and we were unable to recover it. 00:27:39.670 [2024-11-20 10:06:12.991735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.991767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.991894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.991926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 
00:27:39.671 [2024-11-20 10:06:12.992119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.992152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.992335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.992368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.992613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.992645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.992834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.992867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.993055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.993086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 
00:27:39.671 [2024-11-20 10:06:12.993268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.993302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.993434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.993467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.993593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.993625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.993746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.993778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 00:27:39.671 [2024-11-20 10:06:12.993995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.671 [2024-11-20 10:06:12.994028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.671 qpair failed and we were unable to recover it. 
00:27:39.671 [2024-11-20 10:06:12.994223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.994258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.994502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.994535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.994679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.994712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.994913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.995066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.995097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.995243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.995278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.995460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.995492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.995768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.995801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.995992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.996035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.996226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.996259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.996471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.996504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.996771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.996803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.996985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.997017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.997219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.671 [2024-11-20 10:06:12.997253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.671 qpair failed and we were unable to recover it.
00:27:39.671 [2024-11-20 10:06:12.997521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.997563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.997771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.997803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.998042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.998075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.998269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.998305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.998477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.998509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.998643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.998676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.998919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.998953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.999067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.999079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:39.672 [2024-11-20 10:06:12.999105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.999330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.999364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.999609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.999643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:12.999782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:12.999815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.000069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.000102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.000277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.000311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.000446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.000478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.000651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.000684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.000975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.001007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.001220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.001254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.001397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.001430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.001616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.001647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.001847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.001878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.002118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.002150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.002364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.002398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.002571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.002602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.002871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.002904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.003044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.003076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.003194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.003242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.003371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.003404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.003594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.003632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.003821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.003853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.004033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.672 [2024-11-20 10:06:13.004067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.672 qpair failed and we were unable to recover it.
00:27:39.672 [2024-11-20 10:06:13.004254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.004288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.004501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.004534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.004774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.004806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.005060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.005093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.005377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.005412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.005600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.005634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.005778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.005809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.006008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.006041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.006296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.006333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.006527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.006559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.006760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.006792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.006922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.006959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.007193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.007448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.007481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.007670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.007705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.007889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.007921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.008040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.008071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.008273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.008309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.008503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.008537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.008643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.008674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.008845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.008877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.009008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.009042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.009254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.009287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.009482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.009515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.009694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.009727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.009950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.009984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.010139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.010184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.010327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.010367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.673 qpair failed and we were unable to recover it.
00:27:39.673 [2024-11-20 10:06:13.010486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.673 [2024-11-20 10:06:13.010520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.010712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.010746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.010888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.010921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.011132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.011165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.011303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.011337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.011450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.011483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.011721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.011753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.011955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.011988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.012178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.012221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.012351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.012383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.012571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.012604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.012812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.012852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.012989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.013020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.013223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.013256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.013547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.013580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.013805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.013837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.013964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.013996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.014103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.014135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.014410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.014445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.014635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.014673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.014869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.014901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.015092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.015125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.015393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.015691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.015722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.015841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.015874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.016096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.016130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.016320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.016353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.016465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.016497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.016670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.016720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.016922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.016956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.017127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.674 [2024-11-20 10:06:13.017159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.674 qpair failed and we were unable to recover it.
00:27:39.674 [2024-11-20 10:06:13.017445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.017479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.017613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.017646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.017925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.017958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.018158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.018190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.018332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.018365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.018603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.018635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.018887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.018919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.019165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.019216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.019488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.019521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.019638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.019678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.019807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.019842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.020027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.675 [2024-11-20 10:06:13.020059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.675 qpair failed and we were unable to recover it.
00:27:39.675 [2024-11-20 10:06:13.020231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.020266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.020533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.020566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.020758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.020790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.020919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.020951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.021173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.021214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 
00:27:39.675 [2024-11-20 10:06:13.021410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.021443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.021703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.021735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.022004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.022037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.022296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.022331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.022583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.022616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 
00:27:39.675 [2024-11-20 10:06:13.022803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.022842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.023111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.023143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.023439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.023473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.023735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.023768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.024039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.024071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 
00:27:39.675 [2024-11-20 10:06:13.024357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.024393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.024586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.024618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.024803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.024843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.025083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.025114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.025242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.025276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 
00:27:39.675 [2024-11-20 10:06:13.025461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.675 [2024-11-20 10:06:13.025493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.675 qpair failed and we were unable to recover it. 00:27:39.675 [2024-11-20 10:06:13.025671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.025704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.025959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.025992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.026257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.026291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.026422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.026454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.676 [2024-11-20 10:06:13.026655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.026687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.026803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.026845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.027108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.027140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.027383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.027417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.027606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.027638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.676 [2024-11-20 10:06:13.027832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.027865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.028047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.028079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.028226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.028260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.028396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.028429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.028670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.028703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.676 [2024-11-20 10:06:13.028904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.028942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.029133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.029166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.029429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.029463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.029685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.029718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.029905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.029938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.676 [2024-11-20 10:06:13.030069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.030102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.030275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.030308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.030448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.030480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.030671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.030704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.030967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.030998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.676 [2024-11-20 10:06:13.031245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.031280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.031415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.031448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.031565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.031598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.031839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.031871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 00:27:39.676 [2024-11-20 10:06:13.032145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.676 [2024-11-20 10:06:13.032178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.676 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.032361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.032393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.032662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.032695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.032868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.032902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.033109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.033141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.033399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.033432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.033618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.033651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.033844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.033877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.034064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.034097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.034226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.034259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.034431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.034470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.034699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.034731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.034919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.034951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.035131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.035164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.035352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.035385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.035556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.035587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.035774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.035807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.036003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.036035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.036168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.036200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.036335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.036367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.036552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.036584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.036822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.036855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.036987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.037020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.037152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.037188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.037330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.037364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 00:27:39.677 [2024-11-20 10:06:13.037554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.677 [2024-11-20 10:06:13.037590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.677 qpair failed and we were unable to recover it. 
00:27:39.677 [2024-11-20 10:06:13.037783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.677 [2024-11-20 10:06:13.037823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.677 qpair failed and we were unable to recover it.
00:27:39.677 [2024-11-20 10:06:13.038027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.038059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.038238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.038273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.038462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.038495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.038740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.038776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.038904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.038939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.039052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.039085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.039290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.039326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.039462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.039496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.039777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.039810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.039999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.040032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.040044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:39.678 [2024-11-20 10:06:13.040077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:39.678 [2024-11-20 10:06:13.040084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:39.678 [2024-11-20 10:06:13.040090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:39.678 [2024-11-20 10:06:13.040095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:39.678 [2024-11-20 10:06:13.040219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.040252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.040395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.040428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.040667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.040702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.040821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.040853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.041059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.041281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.041556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.041704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.041723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:39.678 [2024-11-20 10:06:13.041849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.041831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:39.678 [2024-11-20 10:06:13.041936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:39.678 [2024-11-20 10:06:13.042025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.042057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 [2024-11-20 10:06:13.041937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.042184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.042226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.042410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.042446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.042695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.042729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.042925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.042960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.043139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.043174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.043470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.043547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.043774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.043941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.678 [2024-11-20 10:06:13.043977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.678 qpair failed and we were unable to recover it.
00:27:39.678 [2024-11-20 10:06:13.044115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.044148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.044356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.044390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.044610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.044644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.044769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.044802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.044932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.044968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.045160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.045194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.045385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.045420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.045593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.045626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.045741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.045782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.045911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.045944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.046128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.046161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.046307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.046342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.046527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.046562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.046767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.046801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.047049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.047083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.047326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.047362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.047480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.047513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.047636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.047669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.047793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.047828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.048007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.048042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.048236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.048270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.048462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.048495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.048751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.048785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.048984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.049016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.049189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.049234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.049420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.049453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.049571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.049606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.049845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.049879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.050135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.050170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.050395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.050430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.050641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.050677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.050918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.050952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.051136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.051170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.051393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.051429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.051612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.051646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.051839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.051880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.052060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.052093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.679 [2024-11-20 10:06:13.052293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.679 [2024-11-20 10:06:13.052329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.679 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.052455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.052489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.052618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.052651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.052844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.052879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.053085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.053119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.053364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.053400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.053584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.053621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.053752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.053789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.053914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.053946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.054147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.054182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.054373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.054407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.054594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.054630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.054852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.054899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.055033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.055068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.055192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.055239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.055504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.055539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.055809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.055843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.056030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.056065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.056262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.056298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.056480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.056514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.056729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.056764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.056886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.056919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.057055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.057090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.057211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.057246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.057367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.057402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.057690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.057734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.057916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.057949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.058083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.058116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.058222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.058256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.058574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.058610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.058801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.058836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.059042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.059078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.059220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.059256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.059456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.680 [2024-11-20 10:06:13.059501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.680 qpair failed and we were unable to recover it.
00:27:39.680 [2024-11-20 10:06:13.059652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.059700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.060013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.060065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.060342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.060393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.060716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.060772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.061003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.061053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.061349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.061414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.061658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.061711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.061990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.062039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.062259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.062311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.062554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.062604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.062764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.062812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.063022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.681 [2024-11-20 10:06:13.063070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.681 qpair failed and we were unable to recover it.
00:27:39.681 [2024-11-20 10:06:13.063365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.063414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.063638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.063692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.063904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.063941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.064130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.064164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.064444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.064481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 
00:27:39.681 [2024-11-20 10:06:13.064610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.064644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.064753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.064795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.064984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.065019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.065226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.065270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.065580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.065631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 
00:27:39.681 [2024-11-20 10:06:13.065901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.065951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.066271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.066327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.066635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.066686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.066942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.066991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.067293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.067349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 
00:27:39.681 [2024-11-20 10:06:13.067488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.067523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.067650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.067685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.067893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.067927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.068223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.068260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.068377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.068410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 
00:27:39.681 [2024-11-20 10:06:13.068642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.068676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.068975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.069027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.069247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.069295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.069543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.069590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.069829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.069878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 
00:27:39.681 [2024-11-20 10:06:13.070116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.681 [2024-11-20 10:06:13.070164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.681 qpair failed and we were unable to recover it. 00:27:39.681 [2024-11-20 10:06:13.070399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.070446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.070750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.070797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.070960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.071011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.071220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.071256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.071501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.071536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.071711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.071745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.071952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.071985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.072249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.072285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.072430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.072465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.072735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.072785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.072995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.073042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.073200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.073261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.073470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.073517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.073744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.073795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.074093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.074141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.074444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.074494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.074690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.074742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.075017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.075052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.075178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.075224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.075358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.075392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.075595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.075630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.075898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.075941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.076161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.076195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.076358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.076565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.076612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.076883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.076929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.077248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.077300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.077612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.077663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.077922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.077970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 
00:27:39.682 [2024-11-20 10:06:13.078250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.078303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.078544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.078583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.078848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.682 [2024-11-20 10:06:13.078882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.682 qpair failed and we were unable to recover it. 00:27:39.682 [2024-11-20 10:06:13.079160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.079192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.079472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.079505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 
00:27:39.683 [2024-11-20 10:06:13.079771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.079806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.079976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.080024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.080228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.080277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.080423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.080471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.080739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.080786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 
00:27:39.683 [2024-11-20 10:06:13.081039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.081090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.081383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.081433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.081708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.081759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.082017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.082060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.082349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 
00:27:39.683 [2024-11-20 10:06:13.082649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.082683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.082940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.082976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.083221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.083258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.083562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.083612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.083785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.083843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 
00:27:39.683 [2024-11-20 10:06:13.084151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.084200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.084489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.084539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.084810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.084857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.085150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.085197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 00:27:39.683 [2024-11-20 10:06:13.085432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.683 [2024-11-20 10:06:13.085483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420 00:27:39.683 qpair failed and we were unable to recover it. 
00:27:39.684 [2024-11-20 10:06:13.093918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.684 [2024-11-20 10:06:13.093953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.684 qpair failed and we were unable to recover it.
00:27:39.684 [2024-11-20 10:06:13.094136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.684 [2024-11-20 10:06:13.094171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf06ba0 with addr=10.0.0.2, port=4420
00:27:39.684 qpair failed and we were unable to recover it.
00:27:39.684 [2024-11-20 10:06:13.094427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.684 [2024-11-20 10:06:13.094487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.684 qpair failed and we were unable to recover it.
00:27:39.684 [2024-11-20 10:06:13.094722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.684 [2024-11-20 10:06:13.094757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.684 qpair failed and we were unable to recover it.
00:27:39.684 [2024-11-20 10:06:13.095032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.684 [2024-11-20 10:06:13.095067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.684 qpair failed and we were unable to recover it.
00:27:39.686 [2024-11-20 10:06:13.111261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.111296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.111423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.111456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.111660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.111721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.111958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.111994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.112199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.112264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 
00:27:39.686 [2024-11-20 10:06:13.112482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.112531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.112736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.112784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.112933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.112981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.113217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.113269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.113496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.113544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 
00:27:39.686 [2024-11-20 10:06:13.113763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.113812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.114077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.114115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.114248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.114283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.114529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.114563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.114749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.114783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 
00:27:39.686 [2024-11-20 10:06:13.114957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.114996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.115177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.115218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.115414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.115448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.115569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.115603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.115784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.115817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 
00:27:39.686 [2024-11-20 10:06:13.116031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.686 [2024-11-20 10:06:13.116066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.686 qpair failed and we were unable to recover it. 00:27:39.686 [2024-11-20 10:06:13.116257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.116293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.116489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.116521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.116708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.116741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.116942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.116975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.117101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.117135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.117266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.117299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.117461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.117494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.117727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.117760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.117951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.118105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.118138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.118428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.118462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.118650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.118683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.118926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.118960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.119222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.119256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.119445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.119478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.119672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.119705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.119892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.119926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.120109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.120142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.120268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.120304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.120436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.120470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.120656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.120689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.120862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.120919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.121187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.121239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.121406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.121443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.121573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.121607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.121895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.121929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.122054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.122089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.122338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.122375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.122493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.122524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 
00:27:39.687 [2024-11-20 10:06:13.122716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.122749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.122925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.122970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.123145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.123178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.687 qpair failed and we were unable to recover it. 00:27:39.687 [2024-11-20 10:06:13.123482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.687 [2024-11-20 10:06:13.123516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.123789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.123832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.123954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.123995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.124177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.124225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.124374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.124407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.124723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.124758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.124964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.124997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.125131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.125165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.125361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.125396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.125615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.125650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.125912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.125944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.126154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.126186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.126399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.126433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.126672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.126704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.126839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.126871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.127050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.127228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.127390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.127558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.127700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.127943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.127977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.128189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.128235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.128415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.128448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.128655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.128695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.128977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.129011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.129315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.129350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.129603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.129640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.129831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.129864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.130149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.130192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.130433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.130470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.130713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.130746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.130886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.130920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 
00:27:39.688 [2024-11-20 10:06:13.131110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.131147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.131291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.131328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.131542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.131575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.688 qpair failed and we were unable to recover it. 00:27:39.688 [2024-11-20 10:06:13.131767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.688 [2024-11-20 10:06:13.131807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.131992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.132028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.132226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.132261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.132506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.132539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.132809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.132845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.133078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.133113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.133225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.133258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.133451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.133493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.133686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.133720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.133831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.133860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.134129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.134166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.134333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.134368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.134571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.134605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.134802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.134835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.134971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.135141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.135308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.135515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.135742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.135952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.135988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.136173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.136222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.136367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.136400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.136610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.136650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.136902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.136938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.137131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.137164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.137388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.137423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.137558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.137593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 
00:27:39.689 [2024-11-20 10:06:13.137705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.137739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.138007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.138041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.689 [2024-11-20 10:06:13.138285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.689 [2024-11-20 10:06:13.138322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.689 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.138475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.138510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.138651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.138684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 [2024-11-20 10:06:13.138853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.138886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.139135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.139171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.139446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.139524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b0000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.139750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.139786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.139910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.139944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 [2024-11-20 10:06:13.140070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.140104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.140371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.140405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.140600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.140634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.140821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.140855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.141129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.141162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 [2024-11-20 10:06:13.141412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.141447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.141631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.141665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.141863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.141896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.142090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.142124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.142381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.142418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 [2024-11-20 10:06:13.142612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.142644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.142893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.142926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.690 [2024-11-20 10:06:13.143063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.143099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.143305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.143340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:39.690 [2024-11-20 10:06:13.143518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.143553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.143739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.690 [2024-11-20 10:06:13.143774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.143985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.144020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.690 [2024-11-20 10:06:13.144147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.144182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 
00:27:39.690 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.690 [2024-11-20 10:06:13.144450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.144484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.144652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.144685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.144975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.145008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.690 [2024-11-20 10:06:13.145228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.690 [2024-11-20 10:06:13.145263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.690 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.145389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.145423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.145670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.145704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.145828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.145861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.145975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.146009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.146222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.146258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.146520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.146557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.146682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.146716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.146902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.146936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.147061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.147223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.147435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.147604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.147744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.147911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.147946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.148066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.148100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.148216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.148252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.148466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.148500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.148681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.148715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.148906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.148939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.149139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.149177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.149377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.149411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.149552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.149588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.149801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.149835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.150006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.150040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.150228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.150264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.150580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.150614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.150807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.150846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.150960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.150993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.151185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.151229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.151416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.151449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.151711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.151744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 
00:27:39.691 [2024-11-20 10:06:13.151920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.151954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.691 [2024-11-20 10:06:13.152092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.691 [2024-11-20 10:06:13.152124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.691 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.152248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.152282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.152457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.152492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.152710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.152744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 
00:27:39.692 [2024-11-20 10:06:13.152932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.152971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.153108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.153143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.153477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.153513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.153700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.153736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.153934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.153970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 
00:27:39.692 [2024-11-20 10:06:13.154086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.154120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.154317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.154353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.154561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.154594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.154783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.154818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.155005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 
00:27:39.692 [2024-11-20 10:06:13.155221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.155428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.155568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.155719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.155938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.155972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 
00:27:39.692 [2024-11-20 10:06:13.156218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.156253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.156439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.156472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.156602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.156644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.156839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.156877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 00:27:39.692 [2024-11-20 10:06:13.157006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.692 [2024-11-20 10:06:13.157039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.692 qpair failed and we were unable to recover it. 
00:27:39.692 [2024-11-20 10:06:13.157235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.157301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.157495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.157659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.157694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.157881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.157914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.158104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.158137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 
00:27:39.693 [2024-11-20 10:06:13.158347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.158386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.158501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.158535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.158659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.158692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.158939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.158972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.159149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.159189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 
00:27:39.693 [2024-11-20 10:06:13.159434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.159479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.159672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.159706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.159844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.159878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.160136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.160174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.160387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.160421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 
00:27:39.693 [2024-11-20 10:06:13.160602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.160635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.160765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.160813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.160995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.161030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.161240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.161276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.161468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.161501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 
00:27:39.693 [2024-11-20 10:06:13.161627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.161662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.161835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.161868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.162064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.162097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.162237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.162273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.162468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.162504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 
00:27:39.693 [2024-11-20 10:06:13.162726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.162759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.162881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.162915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.163020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.693 [2024-11-20 10:06:13.163053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.693 qpair failed and we were unable to recover it. 00:27:39.693 [2024-11-20 10:06:13.163186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.163236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.163478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.163512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.163689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.163722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.163844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.163877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.164005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.164050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.164304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.164339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.164534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.164567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.164742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.164787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.164923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.164956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.165079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.165117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.165359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.165394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.165524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.165569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.165703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.165741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.165867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.165901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.166006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.166171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.166347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.166509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.166667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.166910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.166943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.167056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.167224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.167384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.167631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.167801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.167961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.167998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2b4000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.168132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.168170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 
00:27:39.694 [2024-11-20 10:06:13.168285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.168320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.168432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.168466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.168632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.168665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.168777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.694 [2024-11-20 10:06:13.168812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.694 qpair failed and we were unable to recover it. 00:27:39.694 [2024-11-20 10:06:13.168988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 
00:27:39.695 [2024-11-20 10:06:13.169149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.169361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.169509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.169646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.169878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.169913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 
00:27:39.695 [2024-11-20 10:06:13.170041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.170076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.170214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.170248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.170372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.170406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.170511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.170545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.170833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.170866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 
00:27:39.695 [2024-11-20 10:06:13.170971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.171004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.171116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.171148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.171336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.171371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.171507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.171540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 00:27:39.695 [2024-11-20 10:06:13.171666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.695 [2024-11-20 10:06:13.171700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.695 qpair failed and we were unable to recover it. 
00:27:39.696 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:27:39.696 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:39.696 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.696 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:27:39.699 [2024-11-20 10:06:13.193249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.193284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.193461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.193494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.193678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.193712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.193824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.193858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.194032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 
00:27:39.699 [2024-11-20 10:06:13.194188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.194402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.194610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.194752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.194913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.194947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 
00:27:39.699 [2024-11-20 10:06:13.195135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.195168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.195433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.195468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.195593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.195632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.195826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.195859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.195974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.196007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 
00:27:39.699 [2024-11-20 10:06:13.196195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.196239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.196373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.196407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.196594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.196627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.196892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.196926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.197099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.197132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 
00:27:39.699 [2024-11-20 10:06:13.197342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.197377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.699 [2024-11-20 10:06:13.197572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.699 [2024-11-20 10:06:13.197605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.699 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.197733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.197767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.198008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.198041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.198219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.198253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.198464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.198497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.198615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.198648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.198899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.198933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.199120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.199153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.199362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.199396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.199597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.199631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.199803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.199836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.199963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.199997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.200185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.200228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.200473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.200506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.200696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.200729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.200982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.201014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.201147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.201181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.201387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.201421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.201542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.201575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.201762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.201795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.201981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.202014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.202130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.202164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.202434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.202469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.202656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.202689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.202876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.202910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.203016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.203048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.203343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.203378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.203555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.203588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.203695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.203729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.203910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.203944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.204061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.204095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.204223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.204263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.204458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.204492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.204693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.204727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 
00:27:39.700 [2024-11-20 10:06:13.204839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.204872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.205056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.700 [2024-11-20 10:06:13.205090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.700 qpair failed and we were unable to recover it. 00:27:39.700 [2024-11-20 10:06:13.205257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.205293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.205423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.205457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.205636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.205670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.205931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.205965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.206158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.206193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.206393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.206427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.206696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.206731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.206918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.206952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.207145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.207179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.207319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.207355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.207480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.207515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.207635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.207668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.207936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.207971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.208154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.208187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.208384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.208419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.208542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.208577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.208769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.208803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.209015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.209049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.209310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.209346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.209610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.209644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.209760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.209793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.210038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.210071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.210199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.210242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.210436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.210470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.210738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.210774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.210967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.211002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.211219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.211255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 00:27:39.701 [2024-11-20 10:06:13.211520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.701 [2024-11-20 10:06:13.211554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.701 qpair failed and we were unable to recover it. 
00:27:39.701 [2024-11-20 10:06:13.211677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.701 [2024-11-20 10:06:13.211712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.701 qpair failed and we were unable to recover it.
00:27:39.701 [2024-11-20 10:06:13.211902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.701 [2024-11-20 10:06:13.211936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.701 qpair failed and we were unable to recover it.
00:27:39.701 [2024-11-20 10:06:13.212056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.701 [2024-11-20 10:06:13.212091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.701 qpair failed and we were unable to recover it.
00:27:39.701 [2024-11-20 10:06:13.212358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.701 [2024-11-20 10:06:13.212393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.701 qpair failed and we were unable to recover it.
00:27:39.701 [2024-11-20 10:06:13.212580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.212613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.212738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.212772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.213010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.213044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.213180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.213255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.213531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.213564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.213683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.213718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.213834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.213867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.214059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.214093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.214213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.214248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.214424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.214458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.214589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.214623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.214800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.214834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.215009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.215042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.215148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.215182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.215454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.215488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.215675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.215708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 Malloc0
00:27:39.702 [2024-11-20 10:06:13.215893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.215926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.216059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.216093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.702 [2024-11-20 10:06:13.216284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.216322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.702 [2024-11-20 10:06:13.216513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.216546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-20 10:06:13.216731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.216766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.217012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.217045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.217221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.217256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.217450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.217484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.217620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.217654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.217829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.217863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.218039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.218073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.218190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.218231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.218413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.218453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.218721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.218753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.218942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.218975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.219223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.702 [2024-11-20 10:06:13.219258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.702 qpair failed and we were unable to recover it.
00:27:39.702 [2024-11-20 10:06:13.219389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.703 [2024-11-20 10:06:13.219406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:39.703 [2024-11-20 10:06:13.219423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.703 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.219609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.219642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.219840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.219874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.220117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.220150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.220428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.220463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.220660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.220693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.220959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.220992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.221115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.221148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.221289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.221324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.221514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.221553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.221748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.221782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.221973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.222006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.222188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.222232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.222476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.222509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.222626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.222659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.222843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.222876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.223001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.223034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.223221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.223256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.223485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.223672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.223705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.223944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.223978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.224190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.224234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.224432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.224466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.224692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.224726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.224946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.224980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.225102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.225135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.225351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.225387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.225578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.225611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.225797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.225830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.226068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.226101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.226307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.226342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.226540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.226574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.226819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.226853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.227025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.227058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.227332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.227367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 [2024-11-20 10:06:13.227560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.965 [2024-11-20 10:06:13.227593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.965 qpair failed and we were unable to recover it.
00:27:39.965 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.965 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-11-20 10:06:13.227868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.227903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.966 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-20 10:06:13.228147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.228181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.228392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.228426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.228664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.228698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.228938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.228973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.229243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.229432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.229467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.229708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.229741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.229938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.229973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.230178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.230223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.230358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.230390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.230652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.230690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.230878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.230913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.231118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.231152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.231346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.231381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.231664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.231698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.231817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.231851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.231960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.231993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.232166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.232200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.232482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.966 [2024-11-20 10:06:13.232514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.966 qpair failed and we were unable to recover it.
00:27:39.966 [2024-11-20 10:06:13.232684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.232718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.232961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.232994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.233125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.233158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.233353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.233388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.233578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.233611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 
00:27:39.966 [2024-11-20 10:06:13.233737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.233771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.234012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.234045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.234233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.234268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.234509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.234542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.234726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.234759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 
00:27:39.966 [2024-11-20 10:06:13.234951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.234984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.235170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.235213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.235344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.235377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 [2024-11-20 10:06:13.235482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.235515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.966 qpair failed and we were unable to recover it. 00:27:39.966 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.966 [2024-11-20 10:06:13.235660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.966 [2024-11-20 10:06:13.235693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420 00:27:39.967 qpair failed and we were unable to recover it. 
00:27:39.967 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:39.967 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.967 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:39.967 [2024-11-20 10:06:13.235945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.967 [2024-11-20 10:06:13.235980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.967 qpair failed and we were unable to recover it.
00:27:39.967 [the same three-line failure sequence repeats with timestamps advancing through 2024-11-20 10:06:13.236579]
00:27:39.967 [2024-11-20 10:06:13.236771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.967 [2024-11-20 10:06:13.236805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.967 qpair failed and we were unable to recover it.
00:27:39.967 [the same three-line failure sequence repeats for each retry, with timestamps advancing through 2024-11-20 10:06:13.243577]
00:27:39.967 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:39.967 [2024-11-20 10:06:13.243705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.968 [2024-11-20 10:06:13.243739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.968 [the same three-line failure sequence repeats with timestamps advancing through 2024-11-20 10:06:13.244045]
00:27:39.968 [2024-11-20 10:06:13.244172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.968 [2024-11-20 10:06:13.244217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.968 [the same three-line failure sequence repeats for each retry, with timestamps advancing through 2024-11-20 10:06:13.247226]
00:27:39.968 [2024-11-20 10:06:13.247490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.968 [2024-11-20 10:06:13.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa2bc000b90 with addr=10.0.0.2, port=4420
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.968 [2024-11-20 10:06:13.247624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:39.968 [2024-11-20 10:06:13.250128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.968 [2024-11-20 10:06:13.250248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.968 [2024-11-20 10:06:13.250297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.968 [2024-11-20 10:06:13.250321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.968 [2024-11-20 10:06:13.250342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:39.968 [2024-11-20 10:06:13.250395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.968 10:06:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2815581
00:27:39.968 [2024-11-20 10:06:13.260018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.968 [2024-11-20 10:06:13.260101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.968 [2024-11-20 10:06:13.260132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.968 [2024-11-20 10:06:13.260149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.968 [2024-11-20 10:06:13.260164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:39.968 [2024-11-20 10:06:13.260198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.968 [2024-11-20 10:06:13.270013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.968 [2024-11-20 10:06:13.270083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.968 [2024-11-20 10:06:13.270103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.968 [2024-11-20 10:06:13.270115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.968 [2024-11-20 10:06:13.270124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:39.968 [2024-11-20 10:06:13.270147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:39.968 qpair failed and we were unable to recover it.
00:27:39.969 [the same six-line connect-poll failure block (ctrlr.c: 762 Unknown controller ID 0x1; nvme_fabric.c Connect command failed, rc -5 / completed with error sct 1, sc 130; nvme_tcp.c failed to poll CONNECT and failed to connect tqpair=0x7fa2bc000b90; nvme_qpair.c CQ transport error -6 (No such device or address) on qpair id 1) repeats for the attempts starting at 10:06:13.279996, 10:06:13.290009, 10:06:13.299974, 10:06:13.309950, and 10:06:13.320101, each ending "qpair failed and we were unable to recover it."]
00:27:39.969 [2024-11-20 10:06:13.330096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.330192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.330211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.330219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.330225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.330241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.340114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.340171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.340186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.340193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.340200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.340221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.350077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.350182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.350197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.350207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.350214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.350230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.360149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.360222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.360236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.360244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.360250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.360265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.370175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.370247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.370262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.370270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.370276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.370293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.380226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.380309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.380323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.380330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.380337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.380352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.390171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.390255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.390269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.390276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.969 [2024-11-20 10:06:13.390283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.969 [2024-11-20 10:06:13.390298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.969 qpair failed and we were unable to recover it. 
00:27:39.969 [2024-11-20 10:06:13.400256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.969 [2024-11-20 10:06:13.400313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.969 [2024-11-20 10:06:13.400327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.969 [2024-11-20 10:06:13.400334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.400342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.400358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.410238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.410298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.410313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.410320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.410327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.410342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.420276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.420345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.420359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.420366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.420372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.420388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.430347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.430399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.430413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.430420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.430426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.430441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.440400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.440463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.440477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.440485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.440491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.440506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.450397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.450481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.450496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.450506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.450513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.450527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.460425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.460489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.460504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.460511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.460517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.460532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.470390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.470444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.470458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.470465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.470471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.470487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.480521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.480574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.480588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.480595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.480601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.480616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.490531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.490586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.490601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.490608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.490614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.490633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.500502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.500555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.500569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.500576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.500582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.500597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.970 [2024-11-20 10:06:13.510579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.970 [2024-11-20 10:06:13.510648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.970 [2024-11-20 10:06:13.510662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.970 [2024-11-20 10:06:13.510670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.970 [2024-11-20 10:06:13.510676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.970 [2024-11-20 10:06:13.510692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.970 qpair failed and we were unable to recover it. 
00:27:39.971 [2024-11-20 10:06:13.520557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.971 [2024-11-20 10:06:13.520613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.971 [2024-11-20 10:06:13.520628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.971 [2024-11-20 10:06:13.520635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.971 [2024-11-20 10:06:13.520641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.971 [2024-11-20 10:06:13.520657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.971 qpair failed and we were unable to recover it. 
00:27:39.971 [2024-11-20 10:06:13.530711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.971 [2024-11-20 10:06:13.530819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.971 [2024-11-20 10:06:13.530834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.971 [2024-11-20 10:06:13.530841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.971 [2024-11-20 10:06:13.530847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:39.971 [2024-11-20 10:06:13.530862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.971 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.540638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.540694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.540708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.540715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.540722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.540738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.550707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.550771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.550785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.550792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.550798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.550814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.560775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.560835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.560849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.560857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.560863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.560878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.570773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.570828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.570842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.570851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.570859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.570874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.580827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.580879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.580896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.580903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.580910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.580925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.590763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.230 [2024-11-20 10:06:13.590818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.230 [2024-11-20 10:06:13.590833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.230 [2024-11-20 10:06:13.590841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.230 [2024-11-20 10:06:13.590847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.230 [2024-11-20 10:06:13.590862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.230 qpair failed and we were unable to recover it. 
00:27:40.230 [2024-11-20 10:06:13.600787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.230 [2024-11-20 10:06:13.600846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.230 [2024-11-20 10:06:13.600859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.230 [2024-11-20 10:06:13.600866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.230 [2024-11-20 10:06:13.600873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.230 [2024-11-20 10:06:13.600888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.230 qpair failed and we were unable to recover it.
00:27:40.230 [2024-11-20 10:06:13.610815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.230 [2024-11-20 10:06:13.610869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.230 [2024-11-20 10:06:13.610883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.230 [2024-11-20 10:06:13.610890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.230 [2024-11-20 10:06:13.610897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.230 [2024-11-20 10:06:13.610912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.230 qpair failed and we were unable to recover it.
00:27:40.230 [2024-11-20 10:06:13.620925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.230 [2024-11-20 10:06:13.620984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.230 [2024-11-20 10:06:13.620998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.230 [2024-11-20 10:06:13.621005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.230 [2024-11-20 10:06:13.621015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.230 [2024-11-20 10:06:13.621030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.230 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.630870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.630942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.630956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.630963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.630969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.630984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.640949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.641007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.641021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.641028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.641034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.641049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.650994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.651053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.651068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.651075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.651081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.651097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.661035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.661098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.661113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.661121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.661127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.661142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.671112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.671167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.671182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.671189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.671195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.671215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.681025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.681087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.681100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.681107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.681114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.681129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.691129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.691185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.691200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.691211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.691217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.691233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.701143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.701195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.701222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.701230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.701236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.701252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.711165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.711221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.711240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.711248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.711254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.711269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.721214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.721271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.721286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.721293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.721299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.721314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.231 qpair failed and we were unable to recover it.
00:27:40.231 [2024-11-20 10:06:13.731235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.231 [2024-11-20 10:06:13.731288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.231 [2024-11-20 10:06:13.731301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.231 [2024-11-20 10:06:13.731308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.231 [2024-11-20 10:06:13.731315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.231 [2024-11-20 10:06:13.731331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.741254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.741332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.741347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.741355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.741361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.741377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.751278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.751333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.751347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.751355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.751365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.751381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.761326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.761388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.761402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.761410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.761416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.761432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.771346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.771401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.771415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.771422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.771429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.771444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.781388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.781452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.781469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.781476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.781482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.781498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.791399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.791448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.791462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.791469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.791476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.791491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.232 [2024-11-20 10:06:13.801436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.232 [2024-11-20 10:06:13.801492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.232 [2024-11-20 10:06:13.801506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.232 [2024-11-20 10:06:13.801513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.232 [2024-11-20 10:06:13.801520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.232 [2024-11-20 10:06:13.801534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.232 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.811471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.811525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.811539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.811546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.811553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.811568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.821508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.821577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.821592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.821599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.821606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.821621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.831435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.831497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.831514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.831520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.831527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.831542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.841547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.841603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.841622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.841629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.841636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.841651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.851500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.851561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.851576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.851584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.851590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.851606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.861628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.861680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.861695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.861702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.861709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.861725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.871605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.871662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.871676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.871684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.871690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.871706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.881654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.881713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.881727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.881738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.881744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.881759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.891600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.891668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.891682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.891690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.891697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.891711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.901706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.901760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.901774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.901781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.901787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.901802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.911712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.911764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.911779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.911785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.911792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.911807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.921784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.492 [2024-11-20 10:06:13.921860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.492 [2024-11-20 10:06:13.921874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.492 [2024-11-20 10:06:13.921881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.492 [2024-11-20 10:06:13.921887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.492 [2024-11-20 10:06:13.921903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.492 qpair failed and we were unable to recover it.
00:27:40.492 [2024-11-20 10:06:13.931723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.931780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.931796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.931803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.931809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.931824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:13.941826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.941892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.941906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.941913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.941920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.941934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:13.951863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.493 [2024-11-20 10:06:13.951911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.493 [2024-11-20 10:06:13.951924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.493 [2024-11-20 10:06:13.951931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.493 [2024-11-20 10:06:13.951938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.493 [2024-11-20 10:06:13.951953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.493 qpair failed and we were unable to recover it. 
00:27:40.493 [2024-11-20 10:06:13.961947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.962005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.962019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.962026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.962032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.962047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:13.971916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.971975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.971989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.971996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.972003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.972018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:13.981868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.981934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.981948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.981955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.981962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.981976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:13.991970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:13.992073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:13.992087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:13.992094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:13.992101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:13.992116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:14.002003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:14.002061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:14.002075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:14.002082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:14.002089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:14.002104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:14.011960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:14.012013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:14.012027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:14.012037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:14.012044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:14.012059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:14.022102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:14.022213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:14.022228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:14.022235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:14.022241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:14.022256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:14.032073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:14.032124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.493 [2024-11-20 10:06:14.032137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.493 [2024-11-20 10:06:14.032144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.493 [2024-11-20 10:06:14.032151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.493 [2024-11-20 10:06:14.032166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.493 qpair failed and we were unable to recover it.
00:27:40.493 [2024-11-20 10:06:14.042046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.493 [2024-11-20 10:06:14.042101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.494 [2024-11-20 10:06:14.042115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.494 [2024-11-20 10:06:14.042122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.494 [2024-11-20 10:06:14.042129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.494 [2024-11-20 10:06:14.042144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.494 qpair failed and we were unable to recover it.
00:27:40.494 [2024-11-20 10:06:14.052227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.494 [2024-11-20 10:06:14.052292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.494 [2024-11-20 10:06:14.052307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.494 [2024-11-20 10:06:14.052315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.494 [2024-11-20 10:06:14.052322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.494 [2024-11-20 10:06:14.052341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.494 qpair failed and we were unable to recover it.
00:27:40.494 [2024-11-20 10:06:14.062133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.494 [2024-11-20 10:06:14.062186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.494 [2024-11-20 10:06:14.062200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.494 [2024-11-20 10:06:14.062224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.494 [2024-11-20 10:06:14.062231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.494 [2024-11-20 10:06:14.062246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.494 qpair failed and we were unable to recover it.
00:27:40.753 [2024-11-20 10:06:14.072259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.072314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.072328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.072336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.072342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.072357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.082256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.082316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.082330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.082337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.082344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.082359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.092178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.092235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.092249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.092256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.092263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.092278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.102334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.102440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.102454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.102461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.102467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.102482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.112301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.112352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.112366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.112373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.112379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.112395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.122345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.122412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.122425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.122433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.122439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.122455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.132388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.132442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.132456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.132463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.132469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.132484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.142382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.142439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.142456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.142463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.142469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.142484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.152414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.152487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.152501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.152508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.152514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.152529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.162398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.162492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.162506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.162514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.162520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.162535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.172475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.172537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.172552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.172559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.172565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.172579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.182496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.182548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.182562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.182569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.182579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.754 [2024-11-20 10:06:14.182594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.754 qpair failed and we were unable to recover it.
00:27:40.754 [2024-11-20 10:06:14.192514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.754 [2024-11-20 10:06:14.192618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.754 [2024-11-20 10:06:14.192633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.754 [2024-11-20 10:06:14.192640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.754 [2024-11-20 10:06:14.192646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.192661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.202556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.202612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.202626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.202633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.202639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.202654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.212587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.212646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.212661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.212669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.212675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.212690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.222599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.222697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.222710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.222718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.222724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.222739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.232617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.232667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.232680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.232688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.232695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.232710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.242706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.242766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.242780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.242787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.242794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.242808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.252700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.755 [2024-11-20 10:06:14.252751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.755 [2024-11-20 10:06:14.252764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.755 [2024-11-20 10:06:14.252772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.755 [2024-11-20 10:06:14.252778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:40.755 [2024-11-20 10:06:14.252793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.755 qpair failed and we were unable to recover it.
00:27:40.755 [2024-11-20 10:06:14.262708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.262762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.262776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.262783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.262790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.262806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.272729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.272783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.272800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.272807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.272814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.272828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.282813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.282871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.282885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.282893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.282900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.282915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.292809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.292864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.292878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.292885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.292892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.292907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.302826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.302882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.302897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.302904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.302911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.302926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.312858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.312910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.312925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.312931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.755 [2024-11-20 10:06:14.312941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.755 [2024-11-20 10:06:14.312957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.755 qpair failed and we were unable to recover it. 
00:27:40.755 [2024-11-20 10:06:14.322885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.755 [2024-11-20 10:06:14.322942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.755 [2024-11-20 10:06:14.322956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.755 [2024-11-20 10:06:14.322964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.756 [2024-11-20 10:06:14.322972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:40.756 [2024-11-20 10:06:14.322988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.756 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.332918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.332975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.333002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.333010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.333017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.333037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.342957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.343014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.343028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.343036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.343042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.343057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.353013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.353068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.353082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.353090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.353097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.353112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.363075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.363149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.363164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.363171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.363177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.363192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.373028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.373086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.373101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.373108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.373115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.373129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.383053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.383104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.383118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.383125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.383131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.383147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.393081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.393132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.015 [2024-11-20 10:06:14.393146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.015 [2024-11-20 10:06:14.393153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.015 [2024-11-20 10:06:14.393160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.015 [2024-11-20 10:06:14.393175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.015 qpair failed and we were unable to recover it. 
00:27:41.015 [2024-11-20 10:06:14.403119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.015 [2024-11-20 10:06:14.403175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.403192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.403200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.403211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.403227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.413150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.413209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.413223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.413231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.413238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.413252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.423169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.423228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.423242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.423250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.423256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.423271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.433207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.433263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.433278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.433285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.433292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.433307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.443217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.443274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.443288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.443299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.443305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.443320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.453262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.453318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.453331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.453338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.453345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.453360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.463290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.463343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.463358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.463365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.463371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.463386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.473313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.473368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.473382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.473390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.473397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.473412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.483350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.483406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.483420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.483427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.483434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.483452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.493408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.493465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.493479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.493487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.493493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.493508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.503408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.503461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.503475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.503482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.503488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.503503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.513434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.513490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.513504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.513511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.513518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.513533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.523473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.016 [2024-11-20 10:06:14.523529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.016 [2024-11-20 10:06:14.523543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.016 [2024-11-20 10:06:14.523550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.016 [2024-11-20 10:06:14.523556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.016 [2024-11-20 10:06:14.523572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.016 qpair failed and we were unable to recover it. 
00:27:41.016 [2024-11-20 10:06:14.533500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.016 [2024-11-20 10:06:14.533558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.533573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.533580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.533586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.533601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.017 [2024-11-20 10:06:14.543528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.017 [2024-11-20 10:06:14.543580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.543594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.543601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.543607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.543623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.017 [2024-11-20 10:06:14.553543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.017 [2024-11-20 10:06:14.553621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.553636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.553643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.553649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.553664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.017 [2024-11-20 10:06:14.563607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.017 [2024-11-20 10:06:14.563666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.563680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.563687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.563694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.563709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.017 [2024-11-20 10:06:14.573607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.017 [2024-11-20 10:06:14.573664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.573678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.573689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.573695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.573710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.017 [2024-11-20 10:06:14.583641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.017 [2024-11-20 10:06:14.583695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.017 [2024-11-20 10:06:14.583709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.017 [2024-11-20 10:06:14.583716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.017 [2024-11-20 10:06:14.583723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.017 [2024-11-20 10:06:14.583738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.017 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.593661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.593716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.593730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.593737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.593743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.593759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.603720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.603788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.603803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.603810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.603816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.603831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.613754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.613814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.613829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.613837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.613844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.613862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.623767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.623823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.623837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.623844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.623851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.623866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.633793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.633872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.633886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.633894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.633900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.633915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.643832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.643891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.643905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.643912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.643919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.643934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.653894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.653969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.653984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.653992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.653998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.654014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.663916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.663972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.663986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.663993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.663999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.664015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.673910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.673962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.673975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.673983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.673989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.674004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.683955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.684014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.684028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.684035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.684042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.684057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.693965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.276 [2024-11-20 10:06:14.694020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.276 [2024-11-20 10:06:14.694035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.276 [2024-11-20 10:06:14.694042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.276 [2024-11-20 10:06:14.694049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.276 [2024-11-20 10:06:14.694064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.276 qpair failed and we were unable to recover it.
00:27:41.276 [2024-11-20 10:06:14.704011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.704068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.704086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.704093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.704100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.704115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.714003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.714062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.714076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.714083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.714090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.714105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.724065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.724139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.724153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.724161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.724167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.724182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.734088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.734149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.734163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.734171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.734177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.734192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.744038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.744103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.744117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.744125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.744137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.744152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.754073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.754128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.754142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.754150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.754156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.754172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.764103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.764161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.764175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.764182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.764190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.764211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.774199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.774285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.774299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.774307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.774314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.774329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.784220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.784276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.784290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.784297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.784304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.784319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.794267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.794338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.794353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.794360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.794368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.794383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.804283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.804340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.804353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.804361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.804367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.804381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.814358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.814414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.814428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.814436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.814442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.814457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.824365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.277 [2024-11-20 10:06:14.824431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.277 [2024-11-20 10:06:14.824445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.277 [2024-11-20 10:06:14.824452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.277 [2024-11-20 10:06:14.824458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.277 [2024-11-20 10:06:14.824474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.277 qpair failed and we were unable to recover it.
00:27:41.277 [2024-11-20 10:06:14.834371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.278 [2024-11-20 10:06:14.834428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.278 [2024-11-20 10:06:14.834444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.278 [2024-11-20 10:06:14.834452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.278 [2024-11-20 10:06:14.834458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.278 [2024-11-20 10:06:14.834473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.278 qpair failed and we were unable to recover it.
00:27:41.278 [2024-11-20 10:06:14.844405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.278 [2024-11-20 10:06:14.844461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.278 [2024-11-20 10:06:14.844475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.278 [2024-11-20 10:06:14.844482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.278 [2024-11-20 10:06:14.844488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.278 [2024-11-20 10:06:14.844503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.278 qpair failed and we were unable to recover it.
00:27:41.537 [2024-11-20 10:06:14.854435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.537 [2024-11-20 10:06:14.854490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.537 [2024-11-20 10:06:14.854503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.537 [2024-11-20 10:06:14.854510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.537 [2024-11-20 10:06:14.854517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.537 [2024-11-20 10:06:14.854531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.537 qpair failed and we were unable to recover it.
00:27:41.537 [2024-11-20 10:06:14.864461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.537 [2024-11-20 10:06:14.864517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.537 [2024-11-20 10:06:14.864531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.537 [2024-11-20 10:06:14.864539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.537 [2024-11-20 10:06:14.864546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.537 [2024-11-20 10:06:14.864561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.537 qpair failed and we were unable to recover it.
00:27:41.537 [2024-11-20 10:06:14.874484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.537 [2024-11-20 10:06:14.874537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.537 [2024-11-20 10:06:14.874551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.537 [2024-11-20 10:06:14.874558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.537 [2024-11-20 10:06:14.874567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:41.537 [2024-11-20 10:06:14.874583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:41.537 qpair failed and we were unable to recover it.
00:27:41.537 [2024-11-20 10:06:14.884451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.537 [2024-11-20 10:06:14.884510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.537 [2024-11-20 10:06:14.884524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.884532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.884539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.884553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.894497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.894554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.894568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.894576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.894584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.894598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.904564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.904615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.904629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.904636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.904643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.904658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.914572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.914625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.914639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.914645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.914652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.914667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.924641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.924704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.924719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.924727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.924733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.924749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.934565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.934627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.934642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.934649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.934656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.934672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.944652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.944718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.944732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.944739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.944746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.944761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.954702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.954765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.954779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.954786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.954792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.954807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.964711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.964769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.964786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.964794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.964800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.964815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.974687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.974749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.974764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.974771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.974778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.974793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.984866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.984954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.984968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.984976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.984983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.984998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:14.994766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:14.994845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:14.994860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:14.994867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:14.994873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:14.994890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:15.004770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:15.004824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.538 [2024-11-20 10:06:15.004839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.538 [2024-11-20 10:06:15.004849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.538 [2024-11-20 10:06:15.004856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.538 [2024-11-20 10:06:15.004871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.538 qpair failed and we were unable to recover it. 
00:27:41.538 [2024-11-20 10:06:15.014898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.538 [2024-11-20 10:06:15.014963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.014978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.014986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.014992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.015008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.024919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.024970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.024984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.024991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.024998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.025013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.035015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.035070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.035085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.035093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.035099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.035114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.044994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.045052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.045066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.045074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.045081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.045100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.055002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.055059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.055073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.055081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.055088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.055103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.065013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.065066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.065080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.065087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.065093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.065109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.075039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.075118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.075132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.075140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.075146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.075162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.085080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.085139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.085153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.085161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.085168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.085183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.095120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.095182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.095197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.095208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.095216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.095231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.539 [2024-11-20 10:06:15.105155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.539 [2024-11-20 10:06:15.105214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.539 [2024-11-20 10:06:15.105229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.539 [2024-11-20 10:06:15.105237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.539 [2024-11-20 10:06:15.105243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.539 [2024-11-20 10:06:15.105259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.539 qpair failed and we were unable to recover it. 
00:27:41.799 [2024-11-20 10:06:15.115148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.799 [2024-11-20 10:06:15.115208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.799 [2024-11-20 10:06:15.115223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.799 [2024-11-20 10:06:15.115230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.799 [2024-11-20 10:06:15.115236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.799 [2024-11-20 10:06:15.115251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.799 qpair failed and we were unable to recover it. 
00:27:41.799 [2024-11-20 10:06:15.125224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.799 [2024-11-20 10:06:15.125281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.799 [2024-11-20 10:06:15.125296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.799 [2024-11-20 10:06:15.125303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.799 [2024-11-20 10:06:15.125309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.799 [2024-11-20 10:06:15.125326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.799 qpair failed and we were unable to recover it. 
00:27:41.799 [2024-11-20 10:06:15.135232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.799 [2024-11-20 10:06:15.135290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.799 [2024-11-20 10:06:15.135304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.799 [2024-11-20 10:06:15.135314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.799 [2024-11-20 10:06:15.135320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.799 [2024-11-20 10:06:15.135335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.799 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.145286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.145343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.145357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.145365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.145372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.145387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.155285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.155343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.155357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.155365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.155371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.155386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.165331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.165385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.165399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.165406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.165413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.165428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.175335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.175392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.175406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.175413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.175420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.175438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.185360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.185415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.185429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.185437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.185443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.185458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.195339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.195394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.195408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.195416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.195435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.195450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.205416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.205473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.205487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.205494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.205501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.205516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.215427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.215488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.215502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.215509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.215515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.215530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.225465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.225543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.225557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.225564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.225570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.225585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.235494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.800 [2024-11-20 10:06:15.235550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.800 [2024-11-20 10:06:15.235564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.800 [2024-11-20 10:06:15.235571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.800 [2024-11-20 10:06:15.235578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.800 [2024-11-20 10:06:15.235593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.800 qpair failed and we were unable to recover it. 
00:27:41.800 [2024-11-20 10:06:15.245553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.245614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.245629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.245637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.245643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.245659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.255529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.255589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.255604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.255611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.255618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.255633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.265584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.265637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.265654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.265661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.265667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.265683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.275603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.275668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.275683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.275690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.275696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.275712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.285640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.285698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.285712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.285720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.285726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.285741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.295664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.295771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.295785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.295792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.295798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.295813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.305776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.305829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.305844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.305851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.305861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.305876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.315769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.315822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.315837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.315844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.315850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.315866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.325676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.325739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.325753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.325761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.325767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.325782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.335774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.335845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.335859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.335866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.335872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.335887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.345800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.345856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.345870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.345877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.345884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.801 [2024-11-20 10:06:15.345899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.801 qpair failed and we were unable to recover it. 
00:27:41.801 [2024-11-20 10:06:15.355802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.801 [2024-11-20 10:06:15.355858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.801 [2024-11-20 10:06:15.355873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.801 [2024-11-20 10:06:15.355880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.801 [2024-11-20 10:06:15.355886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.802 [2024-11-20 10:06:15.355901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.802 qpair failed and we were unable to recover it. 
00:27:41.802 [2024-11-20 10:06:15.365867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.802 [2024-11-20 10:06:15.365923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.802 [2024-11-20 10:06:15.365937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.802 [2024-11-20 10:06:15.365945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.802 [2024-11-20 10:06:15.365951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.802 [2024-11-20 10:06:15.365967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.802 qpair failed and we were unable to recover it. 
00:27:41.802 [2024-11-20 10:06:15.375915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.802 [2024-11-20 10:06:15.375967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.802 [2024-11-20 10:06:15.375981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.802 [2024-11-20 10:06:15.375988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.802 [2024-11-20 10:06:15.375993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:41.802 [2024-11-20 10:06:15.376009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:41.802 qpair failed and we were unable to recover it. 
00:27:42.061 [2024-11-20 10:06:15.385958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.061 [2024-11-20 10:06:15.386025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.061 [2024-11-20 10:06:15.386040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.061 [2024-11-20 10:06:15.386047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.061 [2024-11-20 10:06:15.386053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.061 [2024-11-20 10:06:15.386068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.061 qpair failed and we were unable to recover it. 
00:27:42.061 [2024-11-20 10:06:15.396000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.061 [2024-11-20 10:06:15.396049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.061 [2024-11-20 10:06:15.396067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.061 [2024-11-20 10:06:15.396074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.061 [2024-11-20 10:06:15.396081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.061 [2024-11-20 10:06:15.396096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.061 qpair failed and we were unable to recover it. 
00:27:42.061 [2024-11-20 10:06:15.405989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.061 [2024-11-20 10:06:15.406048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.061 [2024-11-20 10:06:15.406062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.061 [2024-11-20 10:06:15.406070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.061 [2024-11-20 10:06:15.406076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.061 [2024-11-20 10:06:15.406091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.061 qpair failed and we were unable to recover it. 
00:27:42.061 [2024-11-20 10:06:15.416017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.061 [2024-11-20 10:06:15.416072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.061 [2024-11-20 10:06:15.416086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.061 [2024-11-20 10:06:15.416093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.061 [2024-11-20 10:06:15.416099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.061 [2024-11-20 10:06:15.416114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.061 qpair failed and we were unable to recover it. 
00:27:42.061 [2024-11-20 10:06:15.426062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.062 [2024-11-20 10:06:15.426117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.062 [2024-11-20 10:06:15.426131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.062 [2024-11-20 10:06:15.426138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.062 [2024-11-20 10:06:15.426145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.062 [2024-11-20 10:06:15.426160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.062 qpair failed and we were unable to recover it. 
00:27:42.062 [2024-11-20 10:06:15.436095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.062 [2024-11-20 10:06:15.436149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.062 [2024-11-20 10:06:15.436163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.062 [2024-11-20 10:06:15.436171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.062 [2024-11-20 10:06:15.436182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.062 [2024-11-20 10:06:15.436197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.062 qpair failed and we were unable to recover it. 
00:27:42.062 [2024-11-20 10:06:15.446062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.062 [2024-11-20 10:06:15.446119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.062 [2024-11-20 10:06:15.446133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.062 [2024-11-20 10:06:15.446140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.062 [2024-11-20 10:06:15.446146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.062 [2024-11-20 10:06:15.446162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.062 qpair failed and we were unable to recover it. 
00:27:42.062 [2024-11-20 10:06:15.456158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.062 [2024-11-20 10:06:15.456216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.062 [2024-11-20 10:06:15.456231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.062 [2024-11-20 10:06:15.456238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.062 [2024-11-20 10:06:15.456244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.062 [2024-11-20 10:06:15.456259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.062 qpair failed and we were unable to recover it. 
00:27:42.062 [2024-11-20 10:06:15.466172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.466232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.466245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.466253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.466259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.466275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.476197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.476249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.476263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.476270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.476276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.476292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.486248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.486313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.486327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.486335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.486341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.486356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.496304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.496362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.496376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.496383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.496389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.496405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.506312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.506366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.506379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.506386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.506392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.506407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.516233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.516292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.516306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.516313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.516320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.516335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.526347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.526413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.526429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.526437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.526443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.526458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.062 [2024-11-20 10:06:15.536288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.062 [2024-11-20 10:06:15.536345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.062 [2024-11-20 10:06:15.536359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.062 [2024-11-20 10:06:15.536366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.062 [2024-11-20 10:06:15.536372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.062 [2024-11-20 10:06:15.536387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.062 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.546359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.546415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.546428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.546435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.546442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.546456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.556422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.556471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.556485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.556492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.556499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.556514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.566477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.566531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.566545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.566555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.566562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.566576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.576507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.576561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.576575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.576582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.576589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.576604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.586526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.586613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.586628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.586635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.586641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.586656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.596546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.596600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.596614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.596621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.596627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.596642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.606581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.606637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.606651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.606658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.606664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.606682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.616610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.616666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.616680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.616687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.616694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.616709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.626645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.626720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.626733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.626740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.626746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.626762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.063 [2024-11-20 10:06:15.636658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.063 [2024-11-20 10:06:15.636713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.063 [2024-11-20 10:06:15.636729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.063 [2024-11-20 10:06:15.636739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.063 [2024-11-20 10:06:15.636747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.063 [2024-11-20 10:06:15.636763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.063 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.646681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.646739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.646752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.646760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.646766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.646781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.656650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.656711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.656726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.656734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.656741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.656756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.666681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.666790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.666804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.666811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.666818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.666833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.676782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.676836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.676850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.676857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.676863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.676878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.686818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.686875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.686890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.686897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.686904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.686919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.696807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.696873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.696887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.696898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.696904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.323 [2024-11-20 10:06:15.696920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.323 qpair failed and we were unable to recover it.
00:27:42.323 [2024-11-20 10:06:15.706866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.323 [2024-11-20 10:06:15.706936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.323 [2024-11-20 10:06:15.706950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.323 [2024-11-20 10:06:15.706957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.323 [2024-11-20 10:06:15.706964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.706980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.716909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.716984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.716999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.717006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.717013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.717027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.726923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.726979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.726994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.727001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.727008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.727023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.736949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.737004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.737019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.737025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.737032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.737050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.746977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.747033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.747046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.747054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.747060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.747075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.757006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.757055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.757069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.757076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.757084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.757099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.767041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.767095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.767109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.767115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.767122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.767137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.777113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.777221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.777236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.777243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.777250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.777264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.787093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.787146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.787161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.787168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.787174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.787189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.797112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.797192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.797211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.797218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.797225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.797239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.807152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.324 [2024-11-20 10:06:15.807210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.324 [2024-11-20 10:06:15.807224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.324 [2024-11-20 10:06:15.807232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.324 [2024-11-20 10:06:15.807238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:42.324 [2024-11-20 10:06:15.807253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:42.324 qpair failed and we were unable to recover it.
00:27:42.324 [2024-11-20 10:06:15.817209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.324 [2024-11-20 10:06:15.817263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.324 [2024-11-20 10:06:15.817277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.324 [2024-11-20 10:06:15.817284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.324 [2024-11-20 10:06:15.817291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.324 [2024-11-20 10:06:15.817306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.827219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.827291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.827308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.827316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.827322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.827338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.837277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.837332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.837346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.837353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.837359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.837375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.847269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.847325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.847339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.847346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.847353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.847367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.857283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.857337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.857351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.857358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.857365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.857380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.867308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.867361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.867375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.867382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.867392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.867408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.877339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.877395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.877410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.877417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.877423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.877438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.887418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.887472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.887485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.887493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.887499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.887514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.325 [2024-11-20 10:06:15.897400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.325 [2024-11-20 10:06:15.897457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.325 [2024-11-20 10:06:15.897471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.325 [2024-11-20 10:06:15.897479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.325 [2024-11-20 10:06:15.897486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.325 [2024-11-20 10:06:15.897501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.325 qpair failed and we were unable to recover it. 
00:27:42.584 [2024-11-20 10:06:15.907458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.584 [2024-11-20 10:06:15.907511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.584 [2024-11-20 10:06:15.907525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.584 [2024-11-20 10:06:15.907532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.907538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.907554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.917489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.917544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.917558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.917565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.917572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.917586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.927445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.927498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.927512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.927519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.927526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.927541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.937574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.937681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.937697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.937705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.937713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.937728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.947540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.947595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.947609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.947616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.947623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.947638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.957565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.957619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.957636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.957643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.957650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.957664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.967604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.967663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.967676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.967683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.967690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.967706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.977624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.977680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.977694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.977701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.977708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.977723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.987673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.987755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.987769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.987776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.987783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.987798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:15.997655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:15.997709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:15.997723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:15.997731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:15.997740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:15.997756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:16.007715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:16.007772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.585 [2024-11-20 10:06:16.007786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.585 [2024-11-20 10:06:16.007793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.585 [2024-11-20 10:06:16.007800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.585 [2024-11-20 10:06:16.007815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.585 qpair failed and we were unable to recover it. 
00:27:42.585 [2024-11-20 10:06:16.017704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.585 [2024-11-20 10:06:16.017758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.017771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.017779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.017785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.017800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.027812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.027865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.027879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.027886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.027893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.027908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.037783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.037835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.037850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.037857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.037863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.037878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.047818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.047877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.047890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.047898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.047904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.047919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.057869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.057961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.057976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.057983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.057990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.058005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.067912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.067978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.067992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.068000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.068006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.068022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.077822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.077886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.077901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.077909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.077916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.077932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [2024-11-20 10:06:16.087932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.586 [2024-11-20 10:06:16.087989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.586 [2024-11-20 10:06:16.088007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.586 [2024-11-20 10:06:16.088015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.586 [2024-11-20 10:06:16.088021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:42.586 [2024-11-20 10:06:16.088036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.586 qpair failed and we were unable to recover it. 
00:27:42.586 [... the same six-entry CONNECT failure sequence (ctrlr.c:762 "Unknown controller ID 0x1" -> nvme_fabric.c:599/610 "Connect command failed, rc -5 ... sct 1, sc 130" -> nvme_tcp.c:2348 -> nvme_tcp.c:2125 -> nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 1") repeats 34 more times at roughly 10 ms intervals, from [2024-11-20 10:06:16.097978] through [2024-11-20 10:06:16.429043], each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:43.130 [2024-11-20 10:06:16.438953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.130 [2024-11-20 10:06:16.439008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.130 [2024-11-20 10:06:16.439022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.130 [2024-11-20 10:06:16.439029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.130 [2024-11-20 10:06:16.439036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.130 [2024-11-20 10:06:16.439051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-11-20 10:06:16.448956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.130 [2024-11-20 10:06:16.449047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.130 [2024-11-20 10:06:16.449061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.130 [2024-11-20 10:06:16.449069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.130 [2024-11-20 10:06:16.449075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.130 [2024-11-20 10:06:16.449090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-11-20 10:06:16.458960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.130 [2024-11-20 10:06:16.459017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.130 [2024-11-20 10:06:16.459031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.130 [2024-11-20 10:06:16.459039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.130 [2024-11-20 10:06:16.459046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.130 [2024-11-20 10:06:16.459061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-11-20 10:06:16.468942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.130 [2024-11-20 10:06:16.468997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.130 [2024-11-20 10:06:16.469011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.130 [2024-11-20 10:06:16.469019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.130 [2024-11-20 10:06:16.469026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.130 [2024-11-20 10:06:16.469041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-11-20 10:06:16.479029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.130 [2024-11-20 10:06:16.479084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.130 [2024-11-20 10:06:16.479098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.130 [2024-11-20 10:06:16.479105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.130 [2024-11-20 10:06:16.479112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.130 [2024-11-20 10:06:16.479127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.130 qpair failed and we were unable to recover it. 
00:27:43.130 [2024-11-20 10:06:16.489101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.489157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.489171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.489178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.489185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.489199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.499100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.499163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.499177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.499185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.499192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.499212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.509073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.509129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.509143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.509150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.509156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.509172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.519143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.519192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.519214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.519222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.519228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.519244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.529205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.529265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.529279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.529286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.529292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.529308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.539257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.539319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.539332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.539339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.539346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.539360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.549232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.549293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.549308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.549316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.549322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.549337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.559256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.559343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.559357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.559365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.559374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.559390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.569303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.569373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.569387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.569394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.569400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.569416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.579333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.579398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.579413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.579420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.579426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.131 [2024-11-20 10:06:16.579441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.131 qpair failed and we were unable to recover it. 
00:27:43.131 [2024-11-20 10:06:16.589359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.131 [2024-11-20 10:06:16.589421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.131 [2024-11-20 10:06:16.589436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.131 [2024-11-20 10:06:16.589444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.131 [2024-11-20 10:06:16.589450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.589466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.599389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.599442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.599456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.599463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.599470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.599485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.609449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.609506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.609520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.609527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.609534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.609549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.619439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.619492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.619505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.619512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.619518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.619534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.629466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.629531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.629545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.629552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.629559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.629574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.639493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.639543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.639556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.639563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.639569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.639585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.649499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.649564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.649579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.649586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.649593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.649610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.659548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.659632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.659647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.659654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.659661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.659676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.669600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.669688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.669702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.669710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.669716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.669732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.679629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.679690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.679704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.679712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.679719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.679734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.689632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.689687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.689700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.689711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.689717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.132 [2024-11-20 10:06:16.689732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.132 qpair failed and we were unable to recover it. 
00:27:43.132 [2024-11-20 10:06:16.699668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.132 [2024-11-20 10:06:16.699723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.132 [2024-11-20 10:06:16.699737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.132 [2024-11-20 10:06:16.699744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.132 [2024-11-20 10:06:16.699751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.133 [2024-11-20 10:06:16.699766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.133 qpair failed and we were unable to recover it. 
00:27:43.392 [2024-11-20 10:06:16.709704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.392 [2024-11-20 10:06:16.709763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.392 [2024-11-20 10:06:16.709778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.392 [2024-11-20 10:06:16.709785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.392 [2024-11-20 10:06:16.709791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.392 [2024-11-20 10:06:16.709807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.392 qpair failed and we were unable to recover it.
00:27:43.392 [2024-11-20 10:06:16.719707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.392 [2024-11-20 10:06:16.719764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.719778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.719785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.719792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.719807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.729752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.729810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.729824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.729831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.729838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.729856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.739782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.739837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.739850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.739857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.739863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.739878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.749809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.749863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.749877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.749883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.749890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.749905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.759765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.759819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.759833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.759840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.759847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.759862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.769851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.769907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.769921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.769928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.769935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.769950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.779888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.779946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.779960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.779967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.779974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.779989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.789913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.789966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.789980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.789987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.789994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.790009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.799937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.799992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.800007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.800014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.800020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.800035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.809975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.810065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.810079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.810086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.810092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.810106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.819996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.820055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.820069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.820080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.820086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.820102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.829944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.830007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.830021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.830029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.830036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.830051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.840082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.393 [2024-11-20 10:06:16.840138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.393 [2024-11-20 10:06:16.840152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.393 [2024-11-20 10:06:16.840160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.393 [2024-11-20 10:06:16.840166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.393 [2024-11-20 10:06:16.840181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.393 qpair failed and we were unable to recover it.
00:27:43.393 [2024-11-20 10:06:16.850105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.850163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.850176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.850183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.850191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.850209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.860136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.860235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.860249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.860257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.860263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.860283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.870138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.870194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.870226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.870233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.870240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.870255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.880172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.880231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.880245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.880252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.880258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.880274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.890215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.890272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.890287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.890294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.890300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.890315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.900234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.900289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.900302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.900310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.900316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.900331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.910270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.910348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.910363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.910370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.910376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.910391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.920274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.920329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.920343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.920351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.920358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.920373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.930327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.930384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.930398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.930405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.930412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.930428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.940346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.940402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.940416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.940424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.940430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.940446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.950373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.950429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.950446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.950454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.950461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.950476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.394 [2024-11-20 10:06:16.960437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.394 [2024-11-20 10:06:16.960491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.394 [2024-11-20 10:06:16.960505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.394 [2024-11-20 10:06:16.960512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.394 [2024-11-20 10:06:16.960519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.394 [2024-11-20 10:06:16.960534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.394 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:16.970448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:16.970519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:16.970533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:16.970541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:16.970547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:16.970563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:16.980471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:16.980525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:16.980539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:16.980546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:16.980552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:16.980568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:16.990518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:16.990576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:16.990590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:16.990598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:16.990607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:16.990622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.000568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.000623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.000637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.000646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.000653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.000667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.010599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.010656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.010670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.010676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.010683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.010698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.020609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.020667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.020681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.020689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.020695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.020710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.030612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.030662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.030676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.030683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.030689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.030706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.040630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.040683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.040697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.040704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.040711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.040726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.050672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.654 [2024-11-20 10:06:17.050731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.654 [2024-11-20 10:06:17.050745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.654 [2024-11-20 10:06:17.050752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.654 [2024-11-20 10:06:17.050759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.654 [2024-11-20 10:06:17.050774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.654 qpair failed and we were unable to recover it.
00:27:43.654 [2024-11-20 10:06:17.060683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.060739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.060753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.060760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.060766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.060781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.070726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.070780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.070795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.070802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.070809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.070824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.080764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.080835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.080854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.080861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.080868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.080883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.090814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.090885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.090900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.090907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.090914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.090929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.100827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.100883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.100897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.100904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.100910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.100925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.110853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.110906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.110920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.110927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.110933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.110948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.120878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.120936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.120949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.120957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.120967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.120981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.130933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.130994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.131008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.131015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.131021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.131036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.140947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.141003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.141017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.141024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.141030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.141045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.150996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.151061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.151076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.151083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.151089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.151104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.655 [2024-11-20 10:06:17.161001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.655 [2024-11-20 10:06:17.161052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.655 [2024-11-20 10:06:17.161066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.655 [2024-11-20 10:06:17.161073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.655 [2024-11-20 10:06:17.161079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.655 [2024-11-20 10:06:17.161094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.655 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.171019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.171099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.171114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.171122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.171129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.171143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.181087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.181144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.181158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.181166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.181173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.181188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.191092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.191147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.191161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.191168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.191174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.191189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.201106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.201162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.201176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.201184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.201190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.201210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.211139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.211198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.211215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.211222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.211229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.211244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.656 [2024-11-20 10:06:17.221166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.656 [2024-11-20 10:06:17.221222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.656 [2024-11-20 10:06:17.221236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.656 [2024-11-20 10:06:17.221243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.656 [2024-11-20 10:06:17.221250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.656 [2024-11-20 10:06:17.221265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.656 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.231257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.231327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.231341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.231348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.231354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.231369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.241257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.241310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.241323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.241331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.241338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.241353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.251263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.251318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.251331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.251341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.251348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.251364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.261288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.261341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.261355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.261362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.261369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.261384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.271325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.271378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.271392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.271399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.271406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.271421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.281330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.281384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.281397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.281404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.281411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.281426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.291406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.291462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.291476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.291483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.291490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.916 [2024-11-20 10:06:17.291509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.916 qpair failed and we were unable to recover it. 
00:27:43.916 [2024-11-20 10:06:17.301394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.916 [2024-11-20 10:06:17.301453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.916 [2024-11-20 10:06:17.301467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.916 [2024-11-20 10:06:17.301475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.916 [2024-11-20 10:06:17.301482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.917 [2024-11-20 10:06:17.301497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.917 qpair failed and we were unable to recover it. 
00:27:43.917 [2024-11-20 10:06:17.311443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.917 [2024-11-20 10:06:17.311508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.917 [2024-11-20 10:06:17.311522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.917 [2024-11-20 10:06:17.311529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.917 [2024-11-20 10:06:17.311535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.917 [2024-11-20 10:06:17.311550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.917 qpair failed and we were unable to recover it. 
00:27:43.917 [2024-11-20 10:06:17.321451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.917 [2024-11-20 10:06:17.321505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.917 [2024-11-20 10:06:17.321519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.917 [2024-11-20 10:06:17.321526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.917 [2024-11-20 10:06:17.321532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:43.917 [2024-11-20 10:06:17.321547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.917 qpair failed and we were unable to recover it. 
00:27:43.917 [2024-11-20 10:06:17.331418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.331477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.331492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.331499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.331506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.331521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.341443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.341501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.341515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.341522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.341529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.341543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.351551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.351627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.351641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.351649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.351655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.351671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.361483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.361534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.361548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.361556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.361562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.361578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.371526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.371584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.371598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.371606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.371613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.371628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.381661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.381718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.381734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.381742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.381748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.381763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.391647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.391702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.391716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.391723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.391730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.391744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.401673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.401729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.401743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.401750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.401756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.401772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.411693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.411748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.411762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.411769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.411775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.411790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.421736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.421788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.421801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.421809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.421814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.917 [2024-11-20 10:06:17.421833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.917 qpair failed and we were unable to recover it.
00:27:43.917 [2024-11-20 10:06:17.431759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.917 [2024-11-20 10:06:17.431813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.917 [2024-11-20 10:06:17.431827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.917 [2024-11-20 10:06:17.431835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.917 [2024-11-20 10:06:17.431841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.431856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.441782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.441837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.441851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.441859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.441865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.441881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.451825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.451877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.451891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.451898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.451904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.451920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.461883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.461971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.461985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.461992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.461998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.462013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.471872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.471934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.471948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.471955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.471961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.471976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.481879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.481932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.481946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.481953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.481960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.481975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:43.918 [2024-11-20 10:06:17.491940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:43.918 [2024-11-20 10:06:17.491994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:43.918 [2024-11-20 10:06:17.492009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:43.918 [2024-11-20 10:06:17.492016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:43.918 [2024-11-20 10:06:17.492022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:43.918 [2024-11-20 10:06:17.492038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:43.918 qpair failed and we were unable to recover it.
00:27:44.178 [2024-11-20 10:06:17.501881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.178 [2024-11-20 10:06:17.501934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.178 [2024-11-20 10:06:17.501948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.178 [2024-11-20 10:06:17.501955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.178 [2024-11-20 10:06:17.501962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.178 [2024-11-20 10:06:17.501977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.178 qpair failed and we were unable to recover it.
00:27:44.178 [2024-11-20 10:06:17.512013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.178 [2024-11-20 10:06:17.512085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.178 [2024-11-20 10:06:17.512103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.178 [2024-11-20 10:06:17.512110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.178 [2024-11-20 10:06:17.512116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.178 [2024-11-20 10:06:17.512131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.178 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.522054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.522110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.522124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.522132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.522139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.522154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.532055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.532112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.532127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.532133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.532140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.532155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.542080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.542184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.542198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.542209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.542215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.542230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.552098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.552150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.552163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.552170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.552180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.552195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.562126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.562179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.562193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.562204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.562210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.562226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.572163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.572224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.572238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.572245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.572252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.572267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.582111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.582176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.582191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.582198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.582209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.582225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.592225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.592284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.592298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.592306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.592312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.592327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.602222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.602280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.602294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.602302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.602308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.602323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.612199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.612268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.612282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.612289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.612295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.612310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.622320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.622377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.622391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.622399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.622405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.622420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.632323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.632390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.632404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.632411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.632418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.179 [2024-11-20 10:06:17.632433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.179 qpair failed and we were unable to recover it.
00:27:44.179 [2024-11-20 10:06:17.642324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.179 [2024-11-20 10:06:17.642381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.179 [2024-11-20 10:06:17.642400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.179 [2024-11-20 10:06:17.642407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.179 [2024-11-20 10:06:17.642413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.180 [2024-11-20 10:06:17.642429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.180 qpair failed and we were unable to recover it.
00:27:44.180 [2024-11-20 10:06:17.652303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.180 [2024-11-20 10:06:17.652359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.180 [2024-11-20 10:06:17.652374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.180 [2024-11-20 10:06:17.652381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.180 [2024-11-20 10:06:17.652387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.180 [2024-11-20 10:06:17.652403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.180 qpair failed and we were unable to recover it.
00:27:44.180 [2024-11-20 10:06:17.662393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.180 [2024-11-20 10:06:17.662459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.180 [2024-11-20 10:06:17.662473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.180 [2024-11-20 10:06:17.662482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.180 [2024-11-20 10:06:17.662488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.180 [2024-11-20 10:06:17.662503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.180 qpair failed and we were unable to recover it.
00:27:44.180 [2024-11-20 10:06:17.672460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.180 [2024-11-20 10:06:17.672518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.180 [2024-11-20 10:06:17.672533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.180 [2024-11-20 10:06:17.672541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.180 [2024-11-20 10:06:17.672547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.180 [2024-11-20 10:06:17.672562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.180 qpair failed and we were unable to recover it.
00:27:44.180 [2024-11-20 10:06:17.682420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.682477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.682491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.682502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.682508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.682524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.692467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.692526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.692540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.692547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.692554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.692569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.702477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.702546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.702561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.702568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.702574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.702590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.712490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.712549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.712563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.712570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.712577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.712592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.722537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.722585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.722599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.722607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.722613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.722629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.732532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.732588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.732602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.732610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.732616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.732631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.742572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.742628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.742642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.742649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.742656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.742670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.180 [2024-11-20 10:06:17.752585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.180 [2024-11-20 10:06:17.752638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.180 [2024-11-20 10:06:17.752652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.180 [2024-11-20 10:06:17.752659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.180 [2024-11-20 10:06:17.752666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.180 [2024-11-20 10:06:17.752681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.180 qpair failed and we were unable to recover it. 
00:27:44.439 [2024-11-20 10:06:17.762707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.439 [2024-11-20 10:06:17.762762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.439 [2024-11-20 10:06:17.762776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.439 [2024-11-20 10:06:17.762783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.439 [2024-11-20 10:06:17.762790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.439 [2024-11-20 10:06:17.762806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.439 qpair failed and we were unable to recover it. 
00:27:44.439 [2024-11-20 10:06:17.772712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.772773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.772787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.772794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.772800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.772816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.782655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.782724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.782737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.782745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.782752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.782767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.792827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.792879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.792894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.792901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.792907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.792922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.802779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.802839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.802854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.802861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.802867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.802882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.812763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.812846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.812860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.812871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.812878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.812894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.822784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.822840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.822855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.822862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.822869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.822884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.832869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.832921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.832935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.832943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.832949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.832964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.842821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.842891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.842905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.842912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.842919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.842936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.852865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.852920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.852934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.852941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.852948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.852966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.862893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.862946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.862960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.862967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.862974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.862989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.873016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.873080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.873095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.873102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.873109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.873123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.883030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.883098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.883112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.883120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.883127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.883143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.440 qpair failed and we were unable to recover it. 
00:27:44.440 [2024-11-20 10:06:17.892973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.440 [2024-11-20 10:06:17.893031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.440 [2024-11-20 10:06:17.893045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.440 [2024-11-20 10:06:17.893053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.440 [2024-11-20 10:06:17.893059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.440 [2024-11-20 10:06:17.893075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.903072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.441 [2024-11-20 10:06:17.903149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.441 [2024-11-20 10:06:17.903164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.441 [2024-11-20 10:06:17.903171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.441 [2024-11-20 10:06:17.903177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.441 [2024-11-20 10:06:17.903192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.913029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.441 [2024-11-20 10:06:17.913094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.441 [2024-11-20 10:06:17.913108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.441 [2024-11-20 10:06:17.913116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.441 [2024-11-20 10:06:17.913122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.441 [2024-11-20 10:06:17.913138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.923123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.441 [2024-11-20 10:06:17.923177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.441 [2024-11-20 10:06:17.923191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.441 [2024-11-20 10:06:17.923198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.441 [2024-11-20 10:06:17.923208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.441 [2024-11-20 10:06:17.923223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.933145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.441 [2024-11-20 10:06:17.933206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.441 [2024-11-20 10:06:17.933221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.441 [2024-11-20 10:06:17.933228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.441 [2024-11-20 10:06:17.933234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.441 [2024-11-20 10:06:17.933250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.943173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.441 [2024-11-20 10:06:17.943255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.441 [2024-11-20 10:06:17.943276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.441 [2024-11-20 10:06:17.943283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.441 [2024-11-20 10:06:17.943290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.441 [2024-11-20 10:06:17.943305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.441 qpair failed and we were unable to recover it. 
00:27:44.441 [2024-11-20 10:06:17.953141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:17.953227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:17.953242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:17.953249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:17.953255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:17.953270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:17.963261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:17.963315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:17.963329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:17.963336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:17.963343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:17.963358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:17.973284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:17.973345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:17.973359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:17.973366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:17.973373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:17.973388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:17.983325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:17.983407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:17.983422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:17.983430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:17.983436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:17.983454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:17.993356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:17.993411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:17.993425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:17.993433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:17.993439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:17.993455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:18.003342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:18.003395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:18.003409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:18.003416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:18.003423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:18.003439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.441 [2024-11-20 10:06:18.013304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.441 [2024-11-20 10:06:18.013362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.441 [2024-11-20 10:06:18.013376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.441 [2024-11-20 10:06:18.013383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.441 [2024-11-20 10:06:18.013390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.441 [2024-11-20 10:06:18.013404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.023407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.023461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.023475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.023482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.023488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.023504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.033446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.033504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.033517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.033525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.033531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.033546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.043446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.043499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.043513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.043520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.043526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.043542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.053496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.053572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.053586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.053593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.053599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.053615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.063527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.063581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.063595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.063602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.063608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.063623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.073582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.073635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.073653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.073661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.073667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.073682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.083570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.083627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.083642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.083650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.083656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.083671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.093604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.093670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.093684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.093691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.093698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.093712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.103621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.103680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.103694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.103702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.701 [2024-11-20 10:06:18.103708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.701 [2024-11-20 10:06:18.103723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.701 qpair failed and we were unable to recover it.
00:27:44.701 [2024-11-20 10:06:18.113661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.701 [2024-11-20 10:06:18.113717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.701 [2024-11-20 10:06:18.113732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.701 [2024-11-20 10:06:18.113739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.113748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.113763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.123674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.123727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.123741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.123748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.123754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.123769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.133646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.133701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.133716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.133725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.133731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.133746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.143726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.143779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.143793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.143800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.143808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.143823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.153755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.153832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.153846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.153855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.153861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.153876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.163793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.163849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.163864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.163871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.163878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.163893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.173743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.173808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.173822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.173829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.173835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.173850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.183851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.183910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.183923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.183930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.183937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.183952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.193883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.193953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.193968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.193974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.193981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.193996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.203931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.204017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.204034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.204041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.204047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.204063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.213928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.213985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.213999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.214006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.214012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.214027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.223950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.224018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.224033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.224040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.224046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.224061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.233924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.233986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.234001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.234008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.702 [2024-11-20 10:06:18.234014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.702 [2024-11-20 10:06:18.234029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.702 qpair failed and we were unable to recover it.
00:27:44.702 [2024-11-20 10:06:18.244016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.702 [2024-11-20 10:06:18.244065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.702 [2024-11-20 10:06:18.244078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.702 [2024-11-20 10:06:18.244089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.703 [2024-11-20 10:06:18.244095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.703 [2024-11-20 10:06:18.244110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.703 qpair failed and we were unable to recover it.
00:27:44.703 [2024-11-20 10:06:18.253978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.703 [2024-11-20 10:06:18.254034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.703 [2024-11-20 10:06:18.254048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.703 [2024-11-20 10:06:18.254055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.703 [2024-11-20 10:06:18.254061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.703 [2024-11-20 10:06:18.254076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.703 qpair failed and we were unable to recover it.
00:27:44.703 [2024-11-20 10:06:18.264081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.703 [2024-11-20 10:06:18.264138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.703 [2024-11-20 10:06:18.264152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.703 [2024-11-20 10:06:18.264160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.703 [2024-11-20 10:06:18.264166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.703 [2024-11-20 10:06:18.264181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.703 qpair failed and we were unable to recover it.
00:27:44.703 [2024-11-20 10:06:18.274123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.703 [2024-11-20 10:06:18.274174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.703 [2024-11-20 10:06:18.274188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.703 [2024-11-20 10:06:18.274195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.703 [2024-11-20 10:06:18.274205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:44.703 [2024-11-20 10:06:18.274220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.703 qpair failed and we were unable to recover it.
00:27:44.963 [2024-11-20 10:06:18.284158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.284216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.284231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.284239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.284246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.284260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.294083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.294137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.294151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.294158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.294165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.294180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.304189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.304246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.304260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.304267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.304273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.304289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.314229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.314283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.314297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.314304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.314311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.314326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.324233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.324289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.324302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.324309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.324316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.324331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.334279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.334338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.334352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.334359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.334365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.334381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.344303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.344357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.344371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.344378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.344385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.344400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.354334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.354383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.354397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.354405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.354411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.354427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.364366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.364421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.364436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.963 [2024-11-20 10:06:18.364443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.963 [2024-11-20 10:06:18.364449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.963 [2024-11-20 10:06:18.364464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.963 qpair failed and we were unable to recover it. 
00:27:44.963 [2024-11-20 10:06:18.374402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.963 [2024-11-20 10:06:18.374458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.963 [2024-11-20 10:06:18.374472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.374482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.374488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.374504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.384364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.384433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.384446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.384454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.384461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.384476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.394443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.394501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.394515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.394522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.394529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.394544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.404521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.404574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.404588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.404595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.404602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.404617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.414504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.414560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.414574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.414581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.414588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.414608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.424536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.424599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.424613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.424621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.424627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.424643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.434552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.434609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.434622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.434630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.434636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.434652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.444565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.444620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.444634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.444642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.444648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.444664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.454652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.454708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.454721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.454728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.454734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.454749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.464665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.464722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.464736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.464744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.964 [2024-11-20 10:06:18.464750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.964 [2024-11-20 10:06:18.464765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.964 qpair failed and we were unable to recover it. 
00:27:44.964 [2024-11-20 10:06:18.474657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.964 [2024-11-20 10:06:18.474708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.964 [2024-11-20 10:06:18.474722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.964 [2024-11-20 10:06:18.474729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.474736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.474751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.484695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.484762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.484776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.484783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.484789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.484805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.494722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.494792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.494806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.494813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.494819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.494835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.504705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.504762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.504779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.504787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.504793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.504808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.514785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.514835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.514850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.514857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.514863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.514878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.524811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.524863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.524877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.524884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.524891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.524906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:44.965 [2024-11-20 10:06:18.534836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.965 [2024-11-20 10:06:18.534894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.965 [2024-11-20 10:06:18.534908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.965 [2024-11-20 10:06:18.534915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.965 [2024-11-20 10:06:18.534922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:44.965 [2024-11-20 10:06:18.534937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.965 qpair failed and we were unable to recover it. 
00:27:45.223 [2024-11-20 10:06:18.544859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.223 [2024-11-20 10:06:18.544913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.223 [2024-11-20 10:06:18.544927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.223 [2024-11-20 10:06:18.544934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.223 [2024-11-20 10:06:18.544943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.223 [2024-11-20 10:06:18.544959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.223 qpair failed and we were unable to recover it. 
00:27:45.223 [2024-11-20 10:06:18.554898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.223 [2024-11-20 10:06:18.554954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.223 [2024-11-20 10:06:18.554968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.223 [2024-11-20 10:06:18.554975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.223 [2024-11-20 10:06:18.554981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.223 [2024-11-20 10:06:18.554997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.223 qpair failed and we were unable to recover it. 
00:27:45.223 [2024-11-20 10:06:18.564916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.223 [2024-11-20 10:06:18.564972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.223 [2024-11-20 10:06:18.564986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.223 [2024-11-20 10:06:18.564994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.223 [2024-11-20 10:06:18.565001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.223 [2024-11-20 10:06:18.565016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.223 qpair failed and we were unable to recover it. 
00:27:45.223 [2024-11-20 10:06:18.574951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.223 [2024-11-20 10:06:18.575058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.223 [2024-11-20 10:06:18.575074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.223 [2024-11-20 10:06:18.575081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.223 [2024-11-20 10:06:18.575088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.223 [2024-11-20 10:06:18.575103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.223 qpair failed and we were unable to recover it.
00:27:45.223 [2024-11-20 10:06:18.585003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.223 [2024-11-20 10:06:18.585057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.223 [2024-11-20 10:06:18.585071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.223 [2024-11-20 10:06:18.585079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.223 [2024-11-20 10:06:18.585085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.223 [2024-11-20 10:06:18.585100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.223 qpair failed and we were unable to recover it.
00:27:45.223 [2024-11-20 10:06:18.595004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.223 [2024-11-20 10:06:18.595057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.223 [2024-11-20 10:06:18.595072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.223 [2024-11-20 10:06:18.595079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.223 [2024-11-20 10:06:18.595085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.223 [2024-11-20 10:06:18.595101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.223 qpair failed and we were unable to recover it.
00:27:45.223 [2024-11-20 10:06:18.605024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.223 [2024-11-20 10:06:18.605101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.605117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.605125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.605131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.605147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.615104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.615163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.615177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.615184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.615190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.615208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.625096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.625153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.625168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.625175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.625181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.625196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.635123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.635176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.635193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.635205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.635211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.635227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.645137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.645224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.645239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.645246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.645252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.645267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.655152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.655213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.655228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.655236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.655243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.655258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.665223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.665283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.665297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.665305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.665311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.665326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.675239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.675305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.675319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.675326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.675335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.675351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.685275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.685357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.685371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.685378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.685385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.685399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.695297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.695355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.695369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.695376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.695383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.695398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.705319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.705381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.705396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.705404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.705410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.705425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.715391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.715473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.715489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.715499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.715506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.715521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.725320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.725403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.725418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.725425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.725431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.725446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.735345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.735401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.735415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.735422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.735429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.735444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.745425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.745482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.745497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.745505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.745511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.745526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.755493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.755551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.755565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.755572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.755579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.755594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.765481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.765536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.765554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.765561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.765567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.765582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.775521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.775591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.775604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.775611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.775617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.775633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.785602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.785659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.785673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.785680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.785687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.785702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.224 [2024-11-20 10:06:18.795592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.224 [2024-11-20 10:06:18.795673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.224 [2024-11-20 10:06:18.795688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.224 [2024-11-20 10:06:18.795695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.224 [2024-11-20 10:06:18.795701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.224 [2024-11-20 10:06:18.795716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.224 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.805586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.805644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.805658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.805669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.805675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.805690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.815636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.815692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.815706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.815713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.815719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.815734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.825624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.825683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.825697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.825705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.825711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.825726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.835706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.835762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.835776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.835784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.835791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.835806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.845709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.845767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.845781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.845788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.845795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.845810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.855749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.855806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.855820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.855827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.855834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.855849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.865798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.865852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.865866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.865873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.865880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.865895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.875858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.875912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.875925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.875932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.875939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.875954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.885777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.885834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.885849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.885856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.885861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.885877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.895862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.895935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.483 [2024-11-20 10:06:18.895950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.483 [2024-11-20 10:06:18.895957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.483 [2024-11-20 10:06:18.895963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.483 [2024-11-20 10:06:18.895978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.483 qpair failed and we were unable to recover it.
00:27:45.483 [2024-11-20 10:06:18.905895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.483 [2024-11-20 10:06:18.905948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.484 [2024-11-20 10:06:18.905962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.484 [2024-11-20 10:06:18.905970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.484 [2024-11-20 10:06:18.905977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.484 [2024-11-20 10:06:18.905992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.484 qpair failed and we were unable to recover it.
00:27:45.484 [2024-11-20 10:06:18.915909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.484 [2024-11-20 10:06:18.915962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.484 [2024-11-20 10:06:18.915976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.484 [2024-11-20 10:06:18.915983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.484 [2024-11-20 10:06:18.915989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90
00:27:45.484 [2024-11-20 10:06:18.916004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.484 qpair failed and we were unable to recover it.
00:27:45.484 [2024-11-20 10:06:18.925936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.925989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.926003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.926010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.926017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.926032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.935968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.936027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.936041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.936052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.936058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.936074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.946040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.946103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.946117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.946125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.946131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.946145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.956035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.956090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.956104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.956111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.956118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.956133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.966054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.966113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.966127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.966135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.966141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.966156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.976081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.976160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.976174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.976181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.976188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.976210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.986155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.986221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.986236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.986244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.986250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.986266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:18.996143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:18.996208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:18.996222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:18.996230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:18.996236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:18.996251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:19.006196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:19.006305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:19.006318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:19.006325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:19.006332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:19.006347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:19.016215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:19.016279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:19.016293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:19.016300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:19.016306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:19.016321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:19.026224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:19.026299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.484 [2024-11-20 10:06:19.026314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.484 [2024-11-20 10:06:19.026321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.484 [2024-11-20 10:06:19.026327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.484 [2024-11-20 10:06:19.026343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.484 qpair failed and we were unable to recover it. 
00:27:45.484 [2024-11-20 10:06:19.036251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.484 [2024-11-20 10:06:19.036325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.485 [2024-11-20 10:06:19.036342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.485 [2024-11-20 10:06:19.036350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.485 [2024-11-20 10:06:19.036359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.485 [2024-11-20 10:06:19.036377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-20 10:06:19.046285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.485 [2024-11-20 10:06:19.046343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.485 [2024-11-20 10:06:19.046357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.485 [2024-11-20 10:06:19.046364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.485 [2024-11-20 10:06:19.046370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.485 [2024-11-20 10:06:19.046385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.485 [2024-11-20 10:06:19.056318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.485 [2024-11-20 10:06:19.056383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.485 [2024-11-20 10:06:19.056397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.485 [2024-11-20 10:06:19.056405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.485 [2024-11-20 10:06:19.056412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.485 [2024-11-20 10:06:19.056427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.485 qpair failed and we were unable to recover it. 
00:27:45.743 [2024-11-20 10:06:19.066352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.743 [2024-11-20 10:06:19.066407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.743 [2024-11-20 10:06:19.066425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.743 [2024-11-20 10:06:19.066432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.743 [2024-11-20 10:06:19.066439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.743 [2024-11-20 10:06:19.066455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.743 qpair failed and we were unable to recover it. 
00:27:45.743 [2024-11-20 10:06:19.076356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.743 [2024-11-20 10:06:19.076433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.743 [2024-11-20 10:06:19.076448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.743 [2024-11-20 10:06:19.076454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.743 [2024-11-20 10:06:19.076461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.743 [2024-11-20 10:06:19.076477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.743 qpair failed and we were unable to recover it. 
00:27:45.743 [2024-11-20 10:06:19.086343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.743 [2024-11-20 10:06:19.086396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.743 [2024-11-20 10:06:19.086411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.743 [2024-11-20 10:06:19.086419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.743 [2024-11-20 10:06:19.086426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.743 [2024-11-20 10:06:19.086441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.743 qpair failed and we were unable to recover it. 
00:27:45.743 [2024-11-20 10:06:19.096385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.743 [2024-11-20 10:06:19.096444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.743 [2024-11-20 10:06:19.096457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.743 [2024-11-20 10:06:19.096464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.743 [2024-11-20 10:06:19.096471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2bc000b90 00:27:45.743 [2024-11-20 10:06:19.096486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.106493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.106589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.106646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.106672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.106716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2b0000b90 00:27:45.744 [2024-11-20 10:06:19.106768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.116509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.116596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.116625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.116639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.116652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2b0000b90 00:27:45.744 [2024-11-20 10:06:19.116683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.126513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.126613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.126632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.126642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.126651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2b0000b90 00:27:45.744 [2024-11-20 10:06:19.126672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.136603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.136750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.136805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.136832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.136854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2b4000b90 00:27:45.744 [2024-11-20 10:06:19.136904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.146532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.146622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.146651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.146667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.146680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa2b4000b90 00:27:45.744 [2024-11-20 10:06:19.146712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:45.744 qpair failed and we were unable to recover it. 00:27:45.744 [2024-11-20 10:06:19.146892] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:45.744 A controller has encountered a failure and is being reset. 
00:27:45.744 [2024-11-20 10:06:19.156614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.156710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.156770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.156796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.156818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf06ba0 00:27:45.744 [2024-11-20 10:06:19.156868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:45.744 qpair failed and we were unable to recover it. 
00:27:45.744 [2024-11-20 10:06:19.166625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:45.744 [2024-11-20 10:06:19.166703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:45.744 [2024-11-20 10:06:19.166732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:45.744 [2024-11-20 10:06:19.166748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:45.744 [2024-11-20 10:06:19.166761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf06ba0 00:27:45.744 [2024-11-20 10:06:19.166791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:45.744 qpair failed and we were unable to recover it. 00:27:45.744 Controller properly reset. 00:27:45.744 Initializing NVMe Controllers 00:27:45.744 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:45.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:45.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:45.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:45.744 Initialization complete. Launching workers. 
00:27:45.744 Starting thread on core 1 00:27:45.744 Starting thread on core 2 00:27:45.744 Starting thread on core 3 00:27:45.744 Starting thread on core 0 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:46.002 00:27:46.002 real 0m10.865s 00:27:46.002 user 0m19.270s 00:27:46.002 sys 0m4.583s 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.002 ************************************ 00:27:46.002 END TEST nvmf_target_disconnect_tc2 00:27:46.002 ************************************ 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:46.002 rmmod nvme_tcp 00:27:46.002 rmmod nvme_fabrics 00:27:46.002 rmmod nvme_keyring
00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2816271 ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2816271 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2816271 ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2816271 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816271 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816271' 00:27:46.002 killing process with pid 2816271 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2816271 00:27:46.002 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2816271 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.260 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.261 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.261 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.261 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.261 10:06:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.166 10:06:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.166 00:27:48.166 real 0m19.638s 00:27:48.166 user 0m47.315s 00:27:48.166 sys 0m9.501s 00:27:48.166 10:06:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.166 10:06:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:48.166 ************************************ 00:27:48.166 END TEST nvmf_target_disconnect 00:27:48.166 ************************************ 00:27:48.425 10:06:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:48.425 00:27:48.425 real 5m55.354s 00:27:48.425 user 10m39.293s 00:27:48.425 sys 1m58.577s 00:27:48.425 10:06:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:48.425 10:06:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.425 ************************************ 00:27:48.425 END TEST nvmf_host 00:27:48.425 ************************************ 00:27:48.425 10:06:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:48.425 10:06:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:48.425 10:06:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:48.425 10:06:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.425 10:06:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.425 10:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.425 ************************************ 00:27:48.425 START TEST nvmf_target_core_interrupt_mode 00:27:48.425 ************************************ 00:27:48.425 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:48.425 * Looking for test storage...
00:27:48.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:48.425 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.425 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.425 10:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.425 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.425 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:48.686 10:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.686 --rc 
genhtml_branch_coverage=1 00:27:48.686 --rc genhtml_function_coverage=1 00:27:48.686 --rc genhtml_legend=1 00:27:48.686 --rc geninfo_all_blocks=1 00:27:48.686 --rc geninfo_unexecuted_blocks=1 00:27:48.686 00:27:48.686 ' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.686 --rc genhtml_branch_coverage=1 00:27:48.686 --rc genhtml_function_coverage=1 00:27:48.686 --rc genhtml_legend=1 00:27:48.686 --rc geninfo_all_blocks=1 00:27:48.686 --rc geninfo_unexecuted_blocks=1 00:27:48.686 00:27:48.686 ' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.686 --rc genhtml_branch_coverage=1 00:27:48.686 --rc genhtml_function_coverage=1 00:27:48.686 --rc genhtml_legend=1 00:27:48.686 --rc geninfo_all_blocks=1 00:27:48.686 --rc geninfo_unexecuted_blocks=1 00:27:48.686 00:27:48.686 ' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.686 --rc genhtml_branch_coverage=1 00:27:48.686 --rc genhtml_function_coverage=1 00:27:48.686 --rc genhtml_legend=1 00:27:48.686 --rc geninfo_all_blocks=1 00:27:48.686 --rc geninfo_unexecuted_blocks=1 00:27:48.686 00:27:48.686 ' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.686 
10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.686 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.687 10:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:48.687 
10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:48.687 ************************************ 00:27:48.687 START TEST nvmf_abort 00:27:48.687 ************************************ 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:48.687 * Looking for test storage... 
00:27:48.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:48.687 10:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.687 --rc genhtml_branch_coverage=1 00:27:48.687 --rc genhtml_function_coverage=1 00:27:48.687 --rc genhtml_legend=1 00:27:48.687 --rc geninfo_all_blocks=1 00:27:48.687 --rc geninfo_unexecuted_blocks=1 00:27:48.687 00:27:48.687 ' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.687 --rc genhtml_branch_coverage=1 00:27:48.687 --rc genhtml_function_coverage=1 00:27:48.687 --rc genhtml_legend=1 00:27:48.687 --rc geninfo_all_blocks=1 00:27:48.687 --rc geninfo_unexecuted_blocks=1 00:27:48.687 00:27:48.687 ' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.687 --rc genhtml_branch_coverage=1 00:27:48.687 --rc genhtml_function_coverage=1 00:27:48.687 --rc genhtml_legend=1 00:27:48.687 --rc geninfo_all_blocks=1 00:27:48.687 --rc geninfo_unexecuted_blocks=1 00:27:48.687 00:27:48.687 ' 00:27:48.687 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.687 --rc genhtml_branch_coverage=1 00:27:48.687 --rc genhtml_function_coverage=1 00:27:48.687 --rc genhtml_legend=1 00:27:48.687 --rc geninfo_all_blocks=1 00:27:48.687 --rc geninfo_unexecuted_blocks=1 00:27:48.687 00:27:48.687 ' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.947 10:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.947 10:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.947 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.948 10:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.520 10:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:55.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:55.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.520 
10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.520 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:55.521 Found net devices under 0000:86:00.0: cvl_0_0 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:55.521 Found net devices under 0000:86:00.1: cvl_0_1 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.521 10:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.521 10:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:55.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:55.521 00:27:55.521 --- 10.0.0.2 ping statistics --- 00:27:55.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.521 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:27:55.521 00:27:55.521 --- 10.0.0.1 ping statistics --- 00:27:55.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.521 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2821020 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2821020 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2821020 ']' 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.521 [2024-11-20 10:06:28.323825] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:55.521 [2024-11-20 10:06:28.324721] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:27:55.521 [2024-11-20 10:06:28.324754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.521 [2024-11-20 10:06:28.403879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:55.521 [2024-11-20 10:06:28.444815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.521 [2024-11-20 10:06:28.444850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.521 [2024-11-20 10:06:28.444857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.521 [2024-11-20 10:06:28.444863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.521 [2024-11-20 10:06:28.444868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.521 [2024-11-20 10:06:28.446178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.521 [2024-11-20 10:06:28.446287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.521 [2024-11-20 10:06:28.446288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.521 [2024-11-20 10:06:28.511720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:55.521 [2024-11-20 10:06:28.512490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:55.521 [2024-11-20 10:06:28.512751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:55.521 [2024-11-20 10:06:28.512896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.521 [2024-11-20 10:06:28.579108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.521 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:55.522 Malloc0 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.522 Delay0 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.522 [2024-11-20 10:06:28.663093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.522 10:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:55.522 [2024-11-20 10:06:28.792905] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:57.424 Initializing NVMe Controllers 00:27:57.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:57.424 controller IO queue size 128 less than required 00:27:57.424 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:57.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:57.424 Initialization complete. Launching workers. 
00:27:57.424 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37912 00:27:57.424 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37973, failed to submit 66 00:27:57.424 success 37912, unsuccessful 61, failed 0 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.424 10:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.424 rmmod nvme_tcp 00:27:57.424 rmmod nvme_fabrics 00:27:57.683 rmmod nvme_keyring 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.683 10:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2821020 ']' 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2821020 ']' 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821020' 00:27:57.683 killing process with pid 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2821020 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.683 10:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.683 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:57.942 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.943 10:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.848 00:27:59.848 real 0m11.248s 00:27:59.848 user 0m10.589s 00:27:59.848 sys 0m5.787s 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:59.848 ************************************ 00:27:59.848 END TEST nvmf_abort 00:27:59.848 ************************************ 00:27:59.848 10:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:59.848 ************************************ 00:27:59.848 START TEST nvmf_ns_hotplug_stress 00:27:59.848 ************************************ 00:27:59.848 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:00.108 * Looking for test storage... 
00:28:00.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.108 10:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.108 10:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.108 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.109 --rc genhtml_branch_coverage=1 00:28:00.109 --rc genhtml_function_coverage=1 00:28:00.109 --rc genhtml_legend=1 00:28:00.109 --rc geninfo_all_blocks=1 00:28:00.109 --rc geninfo_unexecuted_blocks=1 00:28:00.109 00:28:00.109 ' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.109 --rc genhtml_branch_coverage=1 00:28:00.109 --rc genhtml_function_coverage=1 00:28:00.109 --rc genhtml_legend=1 00:28:00.109 --rc geninfo_all_blocks=1 00:28:00.109 --rc geninfo_unexecuted_blocks=1 00:28:00.109 00:28:00.109 ' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.109 --rc genhtml_branch_coverage=1 00:28:00.109 --rc genhtml_function_coverage=1 00:28:00.109 --rc genhtml_legend=1 00:28:00.109 --rc geninfo_all_blocks=1 00:28:00.109 --rc geninfo_unexecuted_blocks=1 00:28:00.109 00:28:00.109 ' 00:28:00.109 10:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.109 --rc genhtml_branch_coverage=1 00:28:00.109 --rc genhtml_function_coverage=1 00:28:00.109 --rc genhtml_legend=1 00:28:00.109 --rc geninfo_all_blocks=1 00:28:00.109 --rc geninfo_unexecuted_blocks=1 00:28:00.109 00:28:00.109 ' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.109 10:06:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.109 
10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.109 10:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.680 
10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.680 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.681 10:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:06.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.681 10:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:06.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.681 
10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:06.681 Found net devices under 0000:86:00.0: cvl_0_0 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:06.681 Found net devices under 0000:86:00.1: cvl_0_1 00:28:06.681 
10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:28:06.681 00:28:06.681 --- 10.0.0.2 ping statistics --- 00:28:06.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.681 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:28:06.681 00:28:06.681 --- 10.0.0.1 ping statistics --- 00:28:06.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.681 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:06.681 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:06.682 10:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2824836 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2824836 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2824836 ']' 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:06.682 [2024-11-20 10:06:39.617385] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:06.682 [2024-11-20 10:06:39.618279] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:28:06.682 [2024-11-20 10:06:39.618311] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.682 [2024-11-20 10:06:39.682333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:06.682 [2024-11-20 10:06:39.724036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.682 [2024-11-20 10:06:39.724071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.682 [2024-11-20 10:06:39.724078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.682 [2024-11-20 10:06:39.724084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.682 [2024-11-20 10:06:39.724090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:06.682 [2024-11-20 10:06:39.725421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.682 [2024-11-20 10:06:39.727217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.682 [2024-11-20 10:06:39.727220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.682 [2024-11-20 10:06:39.792986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:06.682 [2024-11-20 10:06:39.793700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:06.682 [2024-11-20 10:06:39.793731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:06.682 [2024-11-20 10:06:39.793845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:06.682 10:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:06.682 [2024-11-20 10:06:40.039920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.682 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:06.940 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.940 [2024-11-20 10:06:40.448321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.940 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:07.253 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:07.512 Malloc0 00:28:07.512 10:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:07.512 Delay0 00:28:07.512 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.771 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:08.028 NULL1 00:28:08.028 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:08.286 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2825280 00:28:08.286 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:08.286 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:08.286 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.286 10:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.543 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:08.543 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:08.801 true 00:28:08.801 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:08.801 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.059 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.059 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:09.059 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:09.318 true 00:28:09.318 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:09.318 10:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.694 Read completed with error (sct=0, sc=11) 00:28:10.694 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.694 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:10.694 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:10.953 true 00:28:10.953 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:10.953 10:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.889 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.147 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:12.147 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:12.147 true 00:28:12.147 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:12.147 10:06:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.406 10:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.665 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:12.665 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:12.923 true 00:28:12.923 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:12.923 10:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.871 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.157 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.157 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:14.157 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:14.441 true 00:28:14.441 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:14.441 10:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.040 10:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.298 10:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:15.298 10:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:15.557 true 00:28:15.557 10:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:15.557 10:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.814 10:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.814 10:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:15.814 10:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:16.071 true 00:28:16.071 10:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:16.071 10:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 10:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.448 10:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:17.448 10:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 
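Every iteration logged above follows the same pattern from ns_hotplug_stress.sh: probe that the spdk_nvme_perf process (PID 2825280 in this run) is still alive with `kill -0`, detach namespace 1, re-attach Delay0, bump `null_size`, and resize NULL1. A dry-run sketch of that loop, with rpc.py stubbed out by `echo` so the sequence can be traced without a live SPDK target (real runs invoke `$SPDK_DIR/scripts/rpc.py` instead):

```shell
# Dry-run sketch of the hotplug loop seen in the log; rpc() is a stub.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
for _ in 1 2 3; do
    # kill -0 sends no signal, it only tests that the PID exists;
    # $$ stands in here for the perf PID the real script checks.
    kill -0 $$ || break
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))          # 1001, 1002, 1003, ...
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "null_size=$null_size"
```

The `kill -0` guard is what lets the loop stop promptly once the 30-second perf workload exits, instead of resizing against a target nobody is reading from.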
00:28:17.707 true 00:28:17.707 10:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:17.707 10:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.643 10:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.643 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:18.643 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:18.901 true 00:28:18.902 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:18.902 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.160 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.419 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:19.419 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:28:19.419 true 00:28:19.419 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:19.419 10:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 10:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.796 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:20.796 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:21.055 true 00:28:21.055 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:21.055 10:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.991 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:28:21.991 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.991 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:21.991 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:22.249 true 00:28:22.249 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:22.249 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.508 10:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.508 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:22.508 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:22.766 true 00:28:22.766 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:22.766 10:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.702 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.961 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:23.961 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:24.220 true 00:28:24.220 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:24.220 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.478 10:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.478 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:24.478 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:24.737 true 00:28:24.737 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:24.737 10:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.113 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.113 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:26.113 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:26.371 true 00:28:26.371 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:26.371 10:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.630 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.888 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:26.888 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1018 00:28:26.888 true 00:28:26.888 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:26.888 10:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 10:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.264 10:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:28.264 10:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:28.523 true 00:28:28.523 10:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:28.523 10:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.459 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:28:29.459 10:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.459 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:29.460 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:29.753 true 00:28:29.753 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:29.753 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.011 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.269 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:30.269 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:30.269 true 00:28:30.269 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:30.269 10:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 10:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:31.647 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:31.647 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:31.906 true 00:28:31.906 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:31.906 10:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.842 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.842 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:32.842 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:33.101 true 00:28:33.101 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:33.101 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.359 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.618 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:33.618 10:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:33.618 true 00:28:33.618 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:33.618 10:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.994 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:34.994 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:35.253 true 00:28:35.253 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:35.253 10:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.192 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.192 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:36.192 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:36.452 true 00:28:36.452 10:07:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:36.452 10:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.711 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.711 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:36.711 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:36.969 true 00:28:36.969 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:36.969 10:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.905 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.163 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:28:38.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.163 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:38.163 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:38.421 true 00:28:38.421 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:38.421 10:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.353 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.353 Initializing NVMe Controllers 00:28:39.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.353 Controller IO queue size 128, less than required. 00:28:39.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.353 Controller IO queue size 128, less than required. 00:28:39.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:39.353 Initialization complete. Launching workers. 
00:28:39.353 ======================================================== 00:28:39.353 Latency(us) 00:28:39.353 Device Information : IOPS MiB/s Average min max 00:28:39.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2028.03 0.99 43474.55 2162.86 1012444.68 00:28:39.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17682.95 8.63 7216.22 1592.12 372147.43 00:28:39.353 ======================================================== 00:28:39.353 Total : 19710.98 9.62 10946.78 1592.12 1012444.68 00:28:39.353 00:28:39.353 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:28:39.353 10:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:28:39.611 true 00:28:39.611 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2825280 00:28:39.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2825280) - No such process 00:28:39.611 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2825280 00:28:39.611 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.870 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:40.128 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:40.128 
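The single-namespace phase traced above corresponds to script lines @44-@50 of ns_hotplug_stress.sh: while the target process (PID 2825280 here) is alive, namespace 1 is removed, re-added backed by Delay0, and the NULL1 bdev is grown by one block. A minimal self-contained sketch of that loop follows; the `rpc()` stub stands in for scripts/rpc.py and the fixed iteration count is an assumption so the sketch terminates (the real loop runs until `kill -0` fails):

```shell
#!/usr/bin/env bash
# Stub standing in for /var/jenkins/.../spdk/scripts/rpc.py (assumption: the
# real test shells out to rpc.py; here we just echo the call).
rpc() { echo "rpc.py $*"; }

tgt_pid=$$        # assumption: in the real test this is the nvmf_tgt PID (2825280)
null_size=1024

# Mirrors ns_hotplug_stress.sh@44-50, capped at 3 iterations for illustration.
for _ in 1 2 3; do
    kill -0 "$tgt_pid" || break                                  # @44: stop once the target exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    null_size=$((null_size + 1))                                 # @49
    rpc bdev_null_resize NULL1 "$null_size"                      # @50
done
echo "final null_size=$null_size"
```

The `null_size=1023 ... 1029` progression in the log is exactly this counter advancing once per iteration until the `kill: (2825280) - No such process` line ends the loop.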
10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:40.128 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:40.128 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.128 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:40.128 null0 00:28:40.129 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:40.129 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.129 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:40.387 null1 00:28:40.387 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:40.387 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.387 10:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:40.646 null2 00:28:40.646 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:40.646 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.646 10:07:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:40.646 null3 00:28:40.646 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:40.646 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.646 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:40.904 null4 00:28:40.904 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:40.904 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:40.904 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:41.163 null5 00:28:41.163 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:41.163 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:41.163 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:41.163 null6 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:41.422 null7 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:41.422 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2830624 2830626 2830627 2830629 2830631 2830633 2830636 2830638 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.423 10:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:41.682 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:41.940 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:41.940 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.199 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.458 10:07:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:42.458 10:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:42.717 10:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:42.717 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:42.976 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:43.236 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.237 10:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:43.237 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:43.497 10:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:43.497 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:43.756 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:44.023 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.024 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:44.289 10:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.289 10:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:44.548 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:44.548 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:44.548 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:44.549 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:44.549 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.549 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:44.549 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:44.549 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.807 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:44.808 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.067 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.327 10:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:45.327 10:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.585 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:45.586 10:07:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.586 rmmod nvme_tcp 00:28:45.586 rmmod nvme_fabrics 00:28:45.586 rmmod nvme_keyring 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2824836 ']' 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2824836 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2824836 ']' 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 2824836 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.586 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824836 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824836' 00:28:45.845 killing process with pid 2824836 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2824836 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2824836 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.845 10:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.383 00:28:48.383 real 0m48.022s 00:28:48.383 user 3m0.130s 00:28:48.383 sys 0m20.376s 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:48.383 ************************************ 00:28:48.383 END TEST nvmf_ns_hotplug_stress 00:28:48.383 ************************************ 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:48.383 10:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:48.383 ************************************ 00:28:48.383 START TEST nvmf_delete_subsystem 00:28:48.383 ************************************ 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:48.383 * Looking for test storage... 00:28:48.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.383 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.383 10:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.384 --rc genhtml_branch_coverage=1 00:28:48.384 --rc genhtml_function_coverage=1 00:28:48.384 --rc genhtml_legend=1 00:28:48.384 --rc geninfo_all_blocks=1 00:28:48.384 --rc geninfo_unexecuted_blocks=1 00:28:48.384 00:28:48.384 ' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.384 --rc genhtml_branch_coverage=1 00:28:48.384 --rc genhtml_function_coverage=1 00:28:48.384 --rc genhtml_legend=1 00:28:48.384 --rc geninfo_all_blocks=1 00:28:48.384 --rc geninfo_unexecuted_blocks=1 00:28:48.384 00:28:48.384 ' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.384 --rc genhtml_branch_coverage=1 00:28:48.384 --rc genhtml_function_coverage=1 00:28:48.384 --rc genhtml_legend=1 00:28:48.384 --rc geninfo_all_blocks=1 00:28:48.384 --rc geninfo_unexecuted_blocks=1 00:28:48.384 00:28:48.384 ' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.384 --rc genhtml_branch_coverage=1 00:28:48.384 --rc genhtml_function_coverage=1 00:28:48.384 --rc genhtml_legend=1 00:28:48.384 --rc geninfo_all_blocks=1 00:28:48.384 --rc geninfo_unexecuted_blocks=1 00:28:48.384 00:28:48.384 ' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.384 10:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.384 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.385 10:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.742 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.743 10:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.743 10:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.743 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.003 10:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:54.003 Found net devices under 0000:86:00.0: cvl_0_0 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:54.003 Found net devices under 0000:86:00.1: cvl_0_1 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.003 10:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.003 10:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.003 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:54.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:28:54.004 00:28:54.004 --- 10.0.0.2 ping statistics --- 00:28:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.004 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:28:54.004 00:28:54.004 --- 10.0.0.1 ping statistics --- 00:28:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.004 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.004 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.264 
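The trace above (nvmf/common.sh@250-291) brings up the test network by moving the target NIC into a network namespace, addressing both ends, opening port 4420, and ping-checking each direction. The sketch below reconstructs that command sequence from this log; it only prints the commands (interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x addresses are taken from the log itself, and the emitted commands would need root and those real devices to actually run).

```shell
#!/usr/bin/env sh
# Sketch: emit the netns bring-up sequence seen in nvmf_tcp_init above.
# $1=namespace $2=target NIC $3=initiator NIC $4=target IP $5=initiator IP
emit_netns_setup() {
    ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add $ini_ip/24 dev $ini_if
ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 $tgt_ip
ip netns exec $ns ping -c 1 $ini_ip
EOF
}

# Values as they appear in this log:
emit_netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

After this point every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (NVMF_TARGET_NS_CMD), which is why nvmf_tgt below is launched inside the namespace.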
10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2835003 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2835003 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2835003 ']' 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.264 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.264 [2024-11-20 10:07:27.657014] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:54.264 [2024-11-20 10:07:27.657891] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:28:54.264 [2024-11-20 10:07:27.657924] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.264 [2024-11-20 10:07:27.734274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:54.264 [2024-11-20 10:07:27.777135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.264 [2024-11-20 10:07:27.777172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.264 [2024-11-20 10:07:27.777179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.264 [2024-11-20 10:07:27.777185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.264 [2024-11-20 10:07:27.777190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:54.264 [2024-11-20 10:07:27.778384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.264 [2024-11-20 10:07:27.778385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.523 [2024-11-20 10:07:27.846189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:54.523 [2024-11-20 10:07:27.846783] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:54.523 [2024-11-20 10:07:27.846975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.523 [2024-11-20 10:07:27.923116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.523 [2024-11-20 10:07:27.947378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.523 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.524 NULL1 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.524 Delay0 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2835023 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:54.524 10:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:54.524 [2024-11-20 10:07:28.052155] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
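By this point target/delete_subsystem.sh has configured the target entirely over RPC and started spdk_nvme_perf against it. The sketch below reconstructs that RPC sequence from the rpc_cmd lines in this trace (it only prints the calls; running them assumes an SPDK checkout with scripts/rpc.py and the nvmf_tgt started above). The delay bdev's 1,000,000 µs read/write latency is what keeps I/O in flight long enough for the upcoming nvmf_delete_subsystem to abort it, producing the "completed with error (sct=0, sc=8)" lines that follow in the log.

```shell
#!/usr/bin/env sh
# Sketch: the RPC sequence delete_subsystem.sh drives in this log,
# reconstructed from the rpc_cmd invocations above. Print-only.
rpc_seq=$(cat <<'EOF'
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf runs randrw against the delayed namespace while:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
EOF
)
printf '%s\n' "$rpc_seq"
```

The test's point is the last call: deleting the subsystem while perf holds queued I/O must fail those commands cleanly (sc=8, "starting I/O failed: -6" on the initiator) rather than hang or crash the interrupt-mode target.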
00:28:56.428 10:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.428 10:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.428 10:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 starting I/O failed: -6 00:28:56.688 Read completed with error (sct=0, sc=8) 00:28:56.688 Read completed with error (sct=0, 
sc=8) 00:28:56.688 Write completed with error (sct=0, sc=8)
00:28:56.688 Read completed with error (sct=0, sc=8)
00:28:56.688 starting I/O failed: -6
[... repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted; the distinct errors logged among them follow ...]
00:28:56.688 [2024-11-20 10:07:30.173038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f0800d4b0 is same with the state(6) to be set
00:28:56.688 [2024-11-20 10:07:30.174063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c42c0 is same with the state(6) to be set
00:28:57.625 [2024-11-20 10:07:31.147461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c59a0 is same with the state(6) to be set
00:28:57.625 [2024-11-20 10:07:31.177327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f08000c40 is same with the state(6) to be set
00:28:57.625 [2024-11-20 10:07:31.177604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f0800d020 is same with the state(6) to be set
00:28:57.626 [2024-11-20 10:07:31.177746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f0800d7e0 is same with the state(6) to be set
00:28:57.626 [2024-11-20 10:07:31.178418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c4680 is same with the state(6) to be set
00:28:57.626 Initializing NVMe Controllers
00:28:57.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:57.626 Controller IO queue size 128, less than required.
00:28:57.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:57.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:57.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:57.626 Initialization complete. Launching workers.
00:28:57.626 ========================================================
00:28:57.626 Latency(us)
00:28:57.626 Device Information : IOPS MiB/s Average min max
00:28:57.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.87 0.08 866961.10 240.06 1010753.83
00:28:57.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 188.66 0.09 949367.59 468.81 1012055.58
00:28:57.626 ========================================================
00:28:57.626 Total : 346.53 0.17 911824.23 240.06 1012055.58
00:28:57.626
00:28:57.626 [2024-11-20 10:07:31.179290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c59a0 (9): Bad file descriptor
00:28:57.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:57.626 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.626 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:57.626 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2835023 00:28:57.626 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:58.195 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2835023 00:28:58.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2835023) - No such process 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2835023 00:28:58.196 10:07:31
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2835023 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2835023 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
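The trace above shows delete_subsystem.sh polling the killed perf process with `kill -0` and a 0.5s sleep until the PID disappears. A minimal standalone sketch of that polling pattern follows; the background `sleep` is a stand-in for spdk_nvme_perf and the variable names are illustrative, not the SPDK script itself:

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 polling loop traced above: probe the PID with
# signal 0 (existence check, no signal delivered), sleeping between
# probes, with a bounded iteration count like (( delay++ > 30 )).

sleep 0.2 &            # stand-in workload instead of spdk_nvme_perf
pid=$!

delay=0
while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "timed out waiting for $pid" >&2
        exit 1
    fi
    sleep 0.5
done

echo "process $pid exited after $delay polls"
```

Once the loop falls through, `kill -0` fails with "No such process", which is exactly the state the script's subsequent `NOT wait` check expects.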
00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:58.196 [2024-11-20 10:07:31.711497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2835711 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:28:58.196 10:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:58.455 [2024-11-20 10:07:31.802170] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:58.714 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:58.714 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:28:58.714 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:59.282 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:59.282 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:28:59.282 10:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:59.850 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:59.850 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:28:59.850 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:00.417 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:29:00.417 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:29:00.417 10:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:00.676 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:00.676 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:29:00.676 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:01.243 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:01.243 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:29:01.243 10:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:01.501 Initializing NVMe Controllers 00:29:01.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.501 Controller IO queue size 128, less than required. 00:29:01.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:01.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:01.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:01.501 Initialization complete. Launching workers. 
00:29:01.501 ========================================================
00:29:01.501 Latency(us)
00:29:01.501 Device Information : IOPS MiB/s Average min max
00:29:01.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002226.41 1000153.34 1005963.60
00:29:01.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004415.29 1000334.83 1040964.93
00:29:01.501 ========================================================
00:29:01.501 Total : 256.00 0.12 1003320.85 1000153.34 1040964.93
00:29:01.501
00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2835711 00:29:01.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2835711) - No such process 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2835711 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:01.760 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.761 rmmod nvme_tcp 00:29:01.761 rmmod nvme_fabrics 00:29:01.761 rmmod nvme_keyring 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2835003 ']' 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2835003 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2835003 ']' 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2835003 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:01.761 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835003 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.020 10:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835003' 00:29:02.020 killing process with pid 2835003 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2835003 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2835003 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.020 10:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.020 10:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.558 00:29:04.558 real 0m16.119s 00:29:04.558 user 0m26.105s 00:29:04.558 sys 0m6.059s 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:04.558 ************************************ 00:29:04.558 END TEST nvmf_delete_subsystem 00:29:04.558 ************************************ 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:04.558 ************************************ 00:29:04.558 START TEST nvmf_host_management 00:29:04.558 ************************************ 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:04.558 * Looking for test storage... 
00:29:04.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.558 10:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.558 --rc genhtml_branch_coverage=1 00:29:04.558 --rc genhtml_function_coverage=1 00:29:04.558 --rc genhtml_legend=1 00:29:04.558 --rc geninfo_all_blocks=1 00:29:04.558 --rc geninfo_unexecuted_blocks=1 00:29:04.558 00:29:04.558 ' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.558 --rc genhtml_branch_coverage=1 00:29:04.558 --rc genhtml_function_coverage=1 00:29:04.558 --rc genhtml_legend=1 00:29:04.558 --rc geninfo_all_blocks=1 00:29:04.558 --rc geninfo_unexecuted_blocks=1 00:29:04.558 00:29:04.558 ' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.558 --rc genhtml_branch_coverage=1 00:29:04.558 --rc genhtml_function_coverage=1 00:29:04.558 --rc genhtml_legend=1 00:29:04.558 --rc geninfo_all_blocks=1 00:29:04.558 --rc geninfo_unexecuted_blocks=1 00:29:04.558 00:29:04.558 ' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.558 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.558 --rc genhtml_branch_coverage=1 00:29:04.558 --rc genhtml_function_coverage=1 00:29:04.558 --rc genhtml_legend=1 00:29:04.558 --rc geninfo_all_blocks=1 00:29:04.558 --rc geninfo_unexecuted_blocks=1 00:29:04.558 00:29:04.558 ' 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.558 10:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.558 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.559 
10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.559 10:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.130 
10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.130 10:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.130 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:11.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.131 10:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:11.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.131 10:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:11.131 Found net devices under 0000:86:00.0: cvl_0_0 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:11.131 Found net devices under 0000:86:00.1: cvl_0_1 00:29:11.131 10:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:29:11.131 00:29:11.131 --- 10.0.0.2 ping statistics --- 00:29:11.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.131 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:29:11.131 00:29:11.131 --- 10.0.0.1 ping statistics --- 00:29:11.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.131 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2839698 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2839698 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2839698 ']' 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.131 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.132 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.132 10:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 [2024-11-20 10:07:43.851521] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:11.132 [2024-11-20 10:07:43.852494] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:29:11.132 [2024-11-20 10:07:43.852530] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.132 [2024-11-20 10:07:43.930433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.132 [2024-11-20 10:07:43.973690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.132 [2024-11-20 10:07:43.973728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.132 [2024-11-20 10:07:43.973735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.132 [2024-11-20 10:07:43.973742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.132 [2024-11-20 10:07:43.973747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
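The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from autotest_common.sh's waitforlisten helper, which retries (note the `local max_retries=100` fragment) until the target's RPC socket appears. A minimal stand-alone sketch of that retry shape, with illustrative names; the real helper additionally verifies the PID is alive and probes the socket with an RPC:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style loop: poll until a path exists, up to
# max_retries attempts, sleeping briefly between polls. This is only the
# loop shape; SPDK's helper also checks the process and issues an RPC.
wait_for_socket() {
    local sock_path=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [ -e "$sock_path" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: the "socket" shows up after a short delay, as a starting target would
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
wait_for_socket "$sock" 50 && echo "listening on $sock"
rm -f "$sock"
```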
00:29:11.132 [2024-11-20 10:07:43.975290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.132 [2024-11-20 10:07:43.975399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.132 [2024-11-20 10:07:43.975507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.132 [2024-11-20 10:07:43.975508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.132 [2024-11-20 10:07:44.042762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:11.132 [2024-11-20 10:07:44.043843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:11.132 [2024-11-20 10:07:44.043859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:11.132 [2024-11-20 10:07:44.044170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:11.132 [2024-11-20 10:07:44.044240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 [2024-11-20 10:07:44.108167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 10:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 Malloc0 00:29:11.132 [2024-11-20 10:07:44.196271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2839750 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2839750 /var/tmp/bdevperf.sock 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2839750 ']' 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:11.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
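The `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` registered earlier in this trace guarantees teardown whether the test completes, fails, or is killed. A stand-alone sketch of that idiom; `process_shm` and `nvmftestfini` are SPDK test helpers, stubbed here only to show when the trap fires:

```shell
#!/usr/bin/env bash
# Sketch of the trap-based cleanup idiom: register the teardown once and
# it runs on SIGINT, SIGTERM, or any exit. The two helpers are stand-ins
# for SPDK's process_shm / nvmftestfini.
process_shm()  { echo "inspect shm $*"; }
nvmftestfini() { echo "teardown" >> "$MARKER"; }

run_test() (
    # Subshell: its EXIT trap fires when the body finishes or is killed,
    # mirroring how the test script guarantees target teardown.
    trap 'process_shm --id 0 || :; nvmftestfini' SIGINT SIGTERM EXIT
    echo "test body"
)

MARKER=$(mktemp)
run_test
grep -q teardown "$MARKER" && echo "cleanup ran"
rm -f "$MARKER"
```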
00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.132 { 00:29:11.132 "params": { 00:29:11.132 "name": "Nvme$subsystem", 00:29:11.132 "trtype": "$TEST_TRANSPORT", 00:29:11.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.132 "adrfam": "ipv4", 00:29:11.132 "trsvcid": "$NVMF_PORT", 00:29:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.132 "hdgst": ${hdgst:-false}, 00:29:11.132 "ddgst": ${ddgst:-false} 00:29:11.132 }, 00:29:11.132 "method": "bdev_nvme_attach_controller" 00:29:11.132 } 00:29:11.132 EOF 00:29:11.132 )") 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:11.132 10:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.132 "params": { 00:29:11.132 "name": "Nvme0", 00:29:11.132 "trtype": "tcp", 00:29:11.132 "traddr": "10.0.0.2", 00:29:11.132 "adrfam": "ipv4", 00:29:11.132 "trsvcid": "4420", 00:29:11.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.132 "hdgst": false, 00:29:11.132 "ddgst": false 00:29:11.132 }, 00:29:11.132 "method": "bdev_nvme_attach_controller" 00:29:11.132 }' 00:29:11.132 [2024-11-20 10:07:44.294504] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:29:11.132 [2024-11-20 10:07:44.294550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839750 ] 00:29:11.132 [2024-11-20 10:07:44.370969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.132 [2024-11-20 10:07:44.411604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.132 Running I/O for 10 seconds... 
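The rendered bdevperf `--json` config printed above is produced by gen_nvmf_target_json (nvmf/common.sh), which expands the heredoc template visible a few lines earlier once per subsystem index and joins the fragments. A stand-alone sketch of that expansion, with the address, port, and NQN values hard-coded to match the rendered output; the real helper derives them from environment variables such as `$NVMF_FIRST_TARGET_IP` and `$NVMF_PORT`:

```shell
#!/usr/bin/env bash
# Sketch of the heredoc expansion behind gen_nvmf_target_json: emit one
# bdev_nvme_attach_controller fragment for a given subsystem index.
gen_target_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 0
```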
00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:11.701 10:07:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1119 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1119 -ge 100 ']' 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.701 
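The waitforio fragment traced above (target/host_management.sh) polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `num_read_ops` with jq, and breaks once at least 100 reads have completed; the trace shows `read_io_count=1119` on the first poll. The loop shape, with the RPC stubbed to that observed value so it runs stand-alone:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop: up to 10 polls, success once the read
# count reaches 100. The real script runs
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#     | jq -r '.bdevs[0].num_read_ops'
# stubbed here with the count seen in the trace.
get_read_io_count() { echo 1119; }

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(get_read_io_count)
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.1
    done
    return $ret
}

waitforio && echo "I/O observed"
```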
[2024-11-20 10:07:45.203915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224fec0 is same with the state(6) to be set 00:29:11.701 [... same message repeated 20 more times, 10:07:45.203953 through 10:07:45.204074 ...] [2024-11-20 10:07:45.208730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:11.701 [2024-11-20 10:07:45.208762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.208772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:11.701 [2024-11-20 10:07:45.208779]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.208787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.701 [2024-11-20 10:07:45.208794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.208809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:11.701 [2024-11-20 10:07:45.208816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.208822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa500 is same with the state(6) to be set 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:11.701 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.701 [2024-11-20 10:07:45.209569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209701] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.701 [2024-11-20 10:07:45.209740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.701 [2024-11-20 10:07:45.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.209754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.702 [2024-11-20 10:07:45.209763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.209769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.702 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:11.702 [2024-11-20 10:07:45.209777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.209785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.702 [... identical WRITE / ABORTED - SQ DELETION pairs repeated for cid:13 through cid:45, lba 26240 through 30336 (step 128), 10:07:45.209793 through 10:07:45.210293 ...] [2024-11-20 10:07:45.210300] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.210307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.702 [2024-11-20 10:07:45.210315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.210323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.702 [2024-11-20 10:07:45.210331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.702 [2024-11-20 10:07:45.210337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.210539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 
[2024-11-20 10:07:45.210554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.703 [2024-11-20 10:07:45.210560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.211491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:11.703 task offset: 24576 on job bdev=Nvme0n1 fails 00:29:11.703 00:29:11.703 Latency(us) 00:29:11.703 [2024-11-20T09:07:45.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.703 Job: Nvme0n1 ended in about 0.60 seconds with error 00:29:11.703 Verification LBA range: start 0x0 length 0x400 00:29:11.703 Nvme0n1 : 0.60 2014.58 125.91 106.03 0.00 29562.95 1412.14 26713.72 00:29:11.703 [2024-11-20T09:07:45.285Z] =================================================================================================================== 00:29:11.703 [2024-11-20T09:07:45.285Z] Total : 2014.58 125.91 106.03 0.00 29562.95 1412.14 26713.72 00:29:11.703 [2024-11-20 10:07:45.213832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:11.703 [2024-11-20 10:07:45.213853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa500 (9): Bad file descriptor 00:29:11.703 [2024-11-20 10:07:45.214892] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:11.703 [2024-11-20 10:07:45.214955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:11.703 [2024-11-20 10:07:45.214977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:11.703 [2024-11-20 10:07:45.214989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:11.703 [2024-11-20 10:07:45.214997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:11.703 [2024-11-20 10:07:45.215005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.703 [2024-11-20 10:07:45.215011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15aa500 00:29:11.703 [2024-11-20 10:07:45.215030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa500 (9): Bad file descriptor 00:29:11.703 [2024-11-20 10:07:45.215044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:11.703 [2024-11-20 10:07:45.215052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:11.703 [2024-11-20 10:07:45.215061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:11.703 [2024-11-20 10:07:45.215070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:11.703 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.703 10:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2839750 00:29:13.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2839750) - No such process 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:13.079 { 00:29:13.079 "params": { 00:29:13.079 "name": "Nvme$subsystem", 00:29:13.079 "trtype": "$TEST_TRANSPORT", 00:29:13.079 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:13.079 "adrfam": "ipv4", 00:29:13.079 "trsvcid": "$NVMF_PORT", 00:29:13.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.079 "hdgst": ${hdgst:-false}, 00:29:13.079 "ddgst": ${ddgst:-false} 00:29:13.079 }, 00:29:13.079 "method": "bdev_nvme_attach_controller" 00:29:13.079 } 00:29:13.079 EOF 00:29:13.079 )") 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:13.079 10:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:13.079 "params": { 00:29:13.079 "name": "Nvme0", 00:29:13.079 "trtype": "tcp", 00:29:13.079 "traddr": "10.0.0.2", 00:29:13.079 "adrfam": "ipv4", 00:29:13.079 "trsvcid": "4420", 00:29:13.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.079 "hdgst": false, 00:29:13.079 "ddgst": false 00:29:13.079 }, 00:29:13.079 "method": "bdev_nvme_attach_controller" 00:29:13.079 }' 00:29:13.079 [2024-11-20 10:07:46.276427] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:29:13.079 [2024-11-20 10:07:46.276478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840161 ] 00:29:13.079 [2024-11-20 10:07:46.353717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.079 [2024-11-20 10:07:46.392419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.339 Running I/O for 1 seconds... 
00:29:14.275 1984.00 IOPS, 124.00 MiB/s 00:29:14.275 Latency(us) 00:29:14.275 [2024-11-20T09:07:47.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.275 Verification LBA range: start 0x0 length 0x400 00:29:14.275 Nvme0n1 : 1.00 2038.67 127.42 0.00 0.00 30903.54 6428.77 27088.21 00:29:14.275 [2024-11-20T09:07:47.857Z] =================================================================================================================== 00:29:14.275 [2024-11-20T09:07:47.857Z] Total : 2038.67 127.42 0.00 0.00 30903.54 6428.77 27088.21 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.534 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:14.534 
10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.535 rmmod nvme_tcp 00:29:14.535 rmmod nvme_fabrics 00:29:14.535 rmmod nvme_keyring 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2839698 ']' 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2839698 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2839698 ']' 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2839698 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.535 10:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2839698 00:29:14.535 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.535 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.535 10:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2839698' 00:29:14.535 killing process with pid 2839698 00:29:14.535 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2839698 00:29:14.535 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2839698 00:29:14.794 [2024-11-20 10:07:48.162242] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.794 10:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.699 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.699 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:16.699 00:29:16.699 real 0m12.569s 00:29:16.699 user 0m19.091s 00:29:16.699 sys 0m6.447s 00:29:16.699 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.699 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:16.699 ************************************ 00:29:16.699 END TEST nvmf_host_management 00:29:16.699 ************************************ 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:16.959 ************************************ 00:29:16.959 START TEST nvmf_lvol 00:29:16.959 ************************************ 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:16.959 * Looking for test storage... 
00:29:16.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.959 --rc genhtml_branch_coverage=1 00:29:16.959 --rc genhtml_function_coverage=1 00:29:16.959 --rc genhtml_legend=1 00:29:16.959 --rc geninfo_all_blocks=1 00:29:16.959 --rc geninfo_unexecuted_blocks=1 00:29:16.959 00:29:16.959 ' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.959 --rc genhtml_branch_coverage=1 00:29:16.959 --rc genhtml_function_coverage=1 00:29:16.959 --rc genhtml_legend=1 00:29:16.959 --rc geninfo_all_blocks=1 00:29:16.959 --rc geninfo_unexecuted_blocks=1 00:29:16.959 00:29:16.959 ' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.959 --rc genhtml_branch_coverage=1 00:29:16.959 --rc genhtml_function_coverage=1 00:29:16.959 --rc genhtml_legend=1 00:29:16.959 --rc geninfo_all_blocks=1 00:29:16.959 --rc geninfo_unexecuted_blocks=1 00:29:16.959 00:29:16.959 ' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.959 --rc genhtml_branch_coverage=1 00:29:16.959 --rc genhtml_function_coverage=1 00:29:16.959 --rc genhtml_legend=1 00:29:16.959 --rc geninfo_all_blocks=1 00:29:16.959 --rc geninfo_unexecuted_blocks=1 00:29:16.959 00:29:16.959 ' 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:16.959 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.219 
10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.219 10:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.787 10:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.787 10:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:23.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:23.787 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.787 10:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.787 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:23.788 Found net devices under 0000:86:00.0: cvl_0_0 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.788 10:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:23.788 Found net devices under 0000:86:00.1: cvl_0_1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:29:23.788 00:29:23.788 --- 10.0.0.2 ping statistics --- 00:29:23.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.788 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:29:23.788 00:29:23.788 --- 10.0.0.1 ping statistics --- 00:29:23.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.788 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2843898 
00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2843898 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2843898 ']' 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.788 10:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:23.788 [2024-11-20 10:07:56.500265] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:23.788 [2024-11-20 10:07:56.501156] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:29:23.788 [2024-11-20 10:07:56.501191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.788 [2024-11-20 10:07:56.576658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:23.788 [2024-11-20 10:07:56.618110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.788 [2024-11-20 10:07:56.618147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.788 [2024-11-20 10:07:56.618154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.788 [2024-11-20 10:07:56.618160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.788 [2024-11-20 10:07:56.618166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.788 [2024-11-20 10:07:56.619492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.788 [2024-11-20 10:07:56.619601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.788 [2024-11-20 10:07:56.619603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.788 [2024-11-20 10:07:56.686313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:23.789 [2024-11-20 10:07:56.687205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:23.789 [2024-11-20 10:07:56.687511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:23.789 [2024-11-20 10:07:56.687645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:23.789 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.789 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:23.789 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.789 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.789 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:24.047 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.047 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:24.047 [2024-11-20 10:07:57.528393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.047 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:24.305 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:24.305 10:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:24.564 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:24.564 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:24.823 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:25.081 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3143fc72-cf12-4326-97c8-b02549369936 00:29:25.081 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3143fc72-cf12-4326-97c8-b02549369936 lvol 20 00:29:25.081 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=08ca0e9f-b5af-4c04-9ab5-441e72312fe4 00:29:25.081 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:25.340 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08ca0e9f-b5af-4c04-9ab5-441e72312fe4 00:29:25.599 10:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.599 [2024-11-20 10:07:59.156235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.856 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.856 
10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2844433 00:29:25.856 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:25.856 10:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:27.231 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 08ca0e9f-b5af-4c04-9ab5-441e72312fe4 MY_SNAPSHOT 00:29:27.231 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=559ef990-4417-4d36-b193-520c143647e6 00:29:27.231 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 08ca0e9f-b5af-4c04-9ab5-441e72312fe4 30 00:29:27.490 10:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 559ef990-4417-4d36-b193-520c143647e6 MY_CLONE 00:29:27.749 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b1c9ddab-b7fe-462c-861d-144065e94e57 00:29:27.749 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b1c9ddab-b7fe-462c-861d-144065e94e57 00:29:28.008 10:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2844433 00:29:37.985 Initializing NVMe Controllers 00:29:37.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:37.985 
Controller IO queue size 128, less than required. 00:29:37.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:37.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:37.985 Initialization complete. Launching workers. 00:29:37.985 ======================================================== 00:29:37.985 Latency(us) 00:29:37.985 Device Information : IOPS MiB/s Average min max 00:29:37.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12222.30 47.74 10475.35 3852.93 54196.97 00:29:37.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12406.70 48.46 10320.99 1787.00 71700.51 00:29:37.985 ======================================================== 00:29:37.985 Total : 24629.00 96.21 10397.59 1787.00 71700.51 00:29:37.985 00:29:37.985 10:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.985 10:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08ca0e9f-b5af-4c04-9ab5-441e72312fe4 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3143fc72-cf12-4326-97c8-b02549369936 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.985 rmmod nvme_tcp 00:29:37.985 rmmod nvme_fabrics 00:29:37.985 rmmod nvme_keyring 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2843898 ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2843898 ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2843898' 00:29:37.985 killing process with pid 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2843898 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.985 10:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.985 10:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.362 00:29:39.362 real 0m22.429s 00:29:39.362 user 0m55.694s 00:29:39.362 sys 0m9.867s 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:39.362 ************************************ 00:29:39.362 END TEST nvmf_lvol 00:29:39.362 ************************************ 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:39.362 ************************************ 00:29:39.362 START TEST nvmf_lvs_grow 00:29:39.362 ************************************ 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:39.362 * Looking for test storage... 
00:29:39.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.362 10:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.622 10:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.622 10:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.622 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.622 --rc genhtml_branch_coverage=1 00:29:39.622 --rc genhtml_function_coverage=1 00:29:39.622 --rc genhtml_legend=1 00:29:39.622 --rc geninfo_all_blocks=1 00:29:39.622 --rc geninfo_unexecuted_blocks=1 00:29:39.623 00:29:39.623 ' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.623 --rc genhtml_branch_coverage=1 00:29:39.623 --rc genhtml_function_coverage=1 00:29:39.623 --rc genhtml_legend=1 00:29:39.623 --rc geninfo_all_blocks=1 00:29:39.623 --rc geninfo_unexecuted_blocks=1 00:29:39.623 00:29:39.623 ' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.623 --rc genhtml_branch_coverage=1 00:29:39.623 --rc genhtml_function_coverage=1 00:29:39.623 --rc genhtml_legend=1 00:29:39.623 --rc geninfo_all_blocks=1 00:29:39.623 --rc geninfo_unexecuted_blocks=1 00:29:39.623 00:29:39.623 ' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.623 --rc genhtml_branch_coverage=1 00:29:39.623 --rc genhtml_function_coverage=1 00:29:39.623 --rc genhtml_legend=1 00:29:39.623 --rc geninfo_all_blocks=1 00:29:39.623 --rc 
geninfo_unexecuted_blocks=1 00:29:39.623 00:29:39.623 ' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.623 10:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 10:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.623 10:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.623 10:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.193 
10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.193 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.193 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.193 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:46.193 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:46.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:46.194 Found net devices under 0000:86:00.0: cvl_0_0 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.194 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:46.194 Found net devices under 0000:86:00.1: cvl_0_1 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.194 
10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:29:46.194 00:29:46.194 --- 10.0.0.2 ping statistics --- 00:29:46.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.194 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:46.194 00:29:46.194 --- 10.0.0.1 ping statistics --- 00:29:46.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.194 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:46.194 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.195 10:08:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2849609 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2849609 00:29:46.195 10:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2849609 ']' 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.195 [2024-11-20 10:08:19.045832] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:46.195 [2024-11-20 10:08:19.046731] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:29:46.195 [2024-11-20 10:08:19.046762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.195 [2024-11-20 10:08:19.125728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.195 [2024-11-20 10:08:19.166515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.195 [2024-11-20 10:08:19.166552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.195 [2024-11-20 10:08:19.166559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.195 [2024-11-20 10:08:19.166565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.195 [2024-11-20 10:08:19.166570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.195 [2024-11-20 10:08:19.167088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.195 [2024-11-20 10:08:19.234110] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:46.195 [2024-11-20 10:08:19.234329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:46.195 [2024-11-20 10:08:19.471759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.195 ************************************ 00:29:46.195 START TEST lvs_grow_clean 00:29:46.195 ************************************ 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:46.195 10:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.195 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:46.455 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:46.455 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:46.455 10:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:46.455 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:46.455 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:46.714 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:46.714 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:46.714 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 lvol 150 00:29:46.973 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 00:29:46.974 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.974 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:47.233 [2024-11-20 10:08:20.579503] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:47.233 [2024-11-20 10:08:20.579638] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:47.233 true 00:29:47.233 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:47.233 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:47.233 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:47.233 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:47.492 10:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 00:29:47.751 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.011 [2024-11-20 10:08:21.343954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2850101 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2850101 /var/tmp/bdevperf.sock 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2850101 ']' 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.011 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:48.271 [2024-11-20 10:08:21.595014] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:29:48.271 [2024-11-20 10:08:21.595060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850101 ] 00:29:48.271 [2024-11-20 10:08:21.667069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.271 [2024-11-20 10:08:21.708889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.271 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.271 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:48.271 10:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:48.839 Nvme0n1 00:29:48.839 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:48.839 [ 00:29:48.839 { 00:29:48.839 "name": "Nvme0n1", 00:29:48.839 "aliases": [ 00:29:48.839 "62b72b9e-d5a2-4d9e-9f39-4eafff6b5921" 00:29:48.839 ], 00:29:48.839 "product_name": "NVMe disk", 00:29:48.839 
"block_size": 4096, 00:29:48.839 "num_blocks": 38912, 00:29:48.839 "uuid": "62b72b9e-d5a2-4d9e-9f39-4eafff6b5921", 00:29:48.839 "numa_id": 1, 00:29:48.839 "assigned_rate_limits": { 00:29:48.839 "rw_ios_per_sec": 0, 00:29:48.839 "rw_mbytes_per_sec": 0, 00:29:48.839 "r_mbytes_per_sec": 0, 00:29:48.839 "w_mbytes_per_sec": 0 00:29:48.839 }, 00:29:48.839 "claimed": false, 00:29:48.839 "zoned": false, 00:29:48.839 "supported_io_types": { 00:29:48.839 "read": true, 00:29:48.839 "write": true, 00:29:48.839 "unmap": true, 00:29:48.839 "flush": true, 00:29:48.839 "reset": true, 00:29:48.839 "nvme_admin": true, 00:29:48.839 "nvme_io": true, 00:29:48.839 "nvme_io_md": false, 00:29:48.839 "write_zeroes": true, 00:29:48.839 "zcopy": false, 00:29:48.839 "get_zone_info": false, 00:29:48.839 "zone_management": false, 00:29:48.839 "zone_append": false, 00:29:48.839 "compare": true, 00:29:48.839 "compare_and_write": true, 00:29:48.839 "abort": true, 00:29:48.839 "seek_hole": false, 00:29:48.839 "seek_data": false, 00:29:48.839 "copy": true, 00:29:48.839 "nvme_iov_md": false 00:29:48.839 }, 00:29:48.839 "memory_domains": [ 00:29:48.839 { 00:29:48.839 "dma_device_id": "system", 00:29:48.839 "dma_device_type": 1 00:29:48.839 } 00:29:48.839 ], 00:29:48.839 "driver_specific": { 00:29:48.839 "nvme": [ 00:29:48.839 { 00:29:48.839 "trid": { 00:29:48.839 "trtype": "TCP", 00:29:48.839 "adrfam": "IPv4", 00:29:48.839 "traddr": "10.0.0.2", 00:29:48.839 "trsvcid": "4420", 00:29:48.839 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:48.839 }, 00:29:48.839 "ctrlr_data": { 00:29:48.839 "cntlid": 1, 00:29:48.839 "vendor_id": "0x8086", 00:29:48.839 "model_number": "SPDK bdev Controller", 00:29:48.839 "serial_number": "SPDK0", 00:29:48.839 "firmware_revision": "25.01", 00:29:48.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.839 "oacs": { 00:29:48.839 "security": 0, 00:29:48.840 "format": 0, 00:29:48.840 "firmware": 0, 00:29:48.840 "ns_manage": 0 00:29:48.840 }, 00:29:48.840 "multi_ctrlr": true, 
00:29:48.840 "ana_reporting": false 00:29:48.840 }, 00:29:48.840 "vs": { 00:29:48.840 "nvme_version": "1.3" 00:29:48.840 }, 00:29:48.840 "ns_data": { 00:29:48.840 "id": 1, 00:29:48.840 "can_share": true 00:29:48.840 } 00:29:48.840 } 00:29:48.840 ], 00:29:48.840 "mp_policy": "active_passive" 00:29:48.840 } 00:29:48.840 } 00:29:48.840 ] 00:29:48.840 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2850186 00:29:48.840 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:48.840 10:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.099 Running I/O for 10 seconds... 00:29:50.036 Latency(us) 00:29:50.036 [2024-11-20T09:08:23.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.036 Nvme0n1 : 1.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:50.036 [2024-11-20T09:08:23.618Z] =================================================================================================================== 00:29:50.036 [2024-11-20T09:08:23.618Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:50.036 00:29:50.974 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:50.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.974 Nvme0n1 : 2.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:50.974 [2024-11-20T09:08:24.556Z] 
=================================================================================================================== 00:29:50.974 [2024-11-20T09:08:24.556Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:50.974 00:29:51.234 true 00:29:51.234 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:51.234 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:51.234 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:51.234 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:51.234 10:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2850186 00:29:52.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.171 Nvme0n1 : 3.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:29:52.171 [2024-11-20T09:08:25.753Z] =================================================================================================================== 00:29:52.171 [2024-11-20T09:08:25.753Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:29:52.171 00:29:53.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.108 Nvme0n1 : 4.00 23145.75 90.41 0.00 0.00 0.00 0.00 0.00 00:29:53.108 [2024-11-20T09:08:26.690Z] =================================================================================================================== 00:29:53.108 [2024-11-20T09:08:26.690Z] Total : 23145.75 90.41 0.00 0.00 0.00 0.00 0.00 00:29:53.108 00:29:54.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:54.065 Nvme0n1 : 5.00 23215.60 90.69 0.00 0.00 0.00 0.00 0.00 00:29:54.065 [2024-11-20T09:08:27.647Z] =================================================================================================================== 00:29:54.065 [2024-11-20T09:08:27.647Z] Total : 23215.60 90.69 0.00 0.00 0.00 0.00 0.00 00:29:54.065 00:29:55.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.109 Nvme0n1 : 6.00 23262.17 90.87 0.00 0.00 0.00 0.00 0.00 00:29:55.109 [2024-11-20T09:08:28.691Z] =================================================================================================================== 00:29:55.109 [2024-11-20T09:08:28.691Z] Total : 23262.17 90.87 0.00 0.00 0.00 0.00 0.00 00:29:55.109 00:29:56.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.046 Nvme0n1 : 7.00 23304.57 91.03 0.00 0.00 0.00 0.00 0.00 00:29:56.046 [2024-11-20T09:08:29.628Z] =================================================================================================================== 00:29:56.046 [2024-11-20T09:08:29.628Z] Total : 23304.57 91.03 0.00 0.00 0.00 0.00 0.00 00:29:56.046 00:29:56.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.981 Nvme0n1 : 8.00 23292.88 90.99 0.00 0.00 0.00 0.00 0.00 00:29:56.981 [2024-11-20T09:08:30.563Z] =================================================================================================================== 00:29:56.981 [2024-11-20T09:08:30.563Z] Total : 23292.88 90.99 0.00 0.00 0.00 0.00 0.00 00:29:56.981 00:29:58.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.360 Nvme0n1 : 9.00 23301.22 91.02 0.00 0.00 0.00 0.00 0.00 00:29:58.360 [2024-11-20T09:08:31.942Z] =================================================================================================================== 00:29:58.360 [2024-11-20T09:08:31.942Z] Total : 23301.22 91.02 0.00 0.00 0.00 0.00 0.00 00:29:58.360 
00:29:58.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.928 Nvme0n1 : 10.00 23333.30 91.15 0.00 0.00 0.00 0.00 0.00 00:29:58.928 [2024-11-20T09:08:32.510Z] =================================================================================================================== 00:29:58.928 [2024-11-20T09:08:32.510Z] Total : 23333.30 91.15 0.00 0.00 0.00 0.00 0.00 00:29:58.928 00:29:58.928 00:29:58.928 Latency(us) 00:29:58.928 [2024-11-20T09:08:32.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.928 Nvme0n1 : 10.00 23334.56 91.15 0.00 0.00 5482.57 3292.40 27462.70 00:29:58.928 [2024-11-20T09:08:32.510Z] =================================================================================================================== 00:29:58.928 [2024-11-20T09:08:32.510Z] Total : 23334.56 91.15 0.00 0.00 5482.57 3292.40 27462.70 00:29:59.187 { 00:29:59.187 "results": [ 00:29:59.187 { 00:29:59.187 "job": "Nvme0n1", 00:29:59.187 "core_mask": "0x2", 00:29:59.187 "workload": "randwrite", 00:29:59.187 "status": "finished", 00:29:59.187 "queue_depth": 128, 00:29:59.187 "io_size": 4096, 00:29:59.187 "runtime": 10.004944, 00:29:59.187 "iops": 23334.563391859065, 00:29:59.187 "mibps": 91.15063824944947, 00:29:59.187 "io_failed": 0, 00:29:59.187 "io_timeout": 0, 00:29:59.187 "avg_latency_us": 5482.574400936956, 00:29:59.187 "min_latency_us": 3292.4038095238097, 00:29:59.187 "max_latency_us": 27462.704761904763 00:29:59.187 } 00:29:59.187 ], 00:29:59.187 "core_count": 1 00:29:59.187 } 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2850101 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2850101 ']' 00:29:59.187 10:08:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2850101 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850101 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850101' 00:29:59.187 killing process with pid 2850101 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2850101 00:29:59.187 Received shutdown signal, test time was about 10.000000 seconds 00:29:59.187 00:29:59.187 Latency(us) 00:29:59.187 [2024-11-20T09:08:32.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.187 [2024-11-20T09:08:32.769Z] =================================================================================================================== 00:29:59.187 [2024-11-20T09:08:32.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2850101 00:29:59.187 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.446 10:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.705 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:59.705 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:59.965 [2024-11-20 10:08:33.491538] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:59.965 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:30:00.224 request: 00:30:00.224 { 00:30:00.224 "uuid": "f1251ef0-2448-4a5d-97ae-c678d83b8ba4", 00:30:00.224 "method": 
"bdev_lvol_get_lvstores", 00:30:00.224 "req_id": 1 00:30:00.224 } 00:30:00.224 Got JSON-RPC error response 00:30:00.224 response: 00:30:00.224 { 00:30:00.224 "code": -19, 00:30:00.224 "message": "No such device" 00:30:00.224 } 00:30:00.224 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:00.224 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:00.224 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:00.224 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:00.224 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:00.483 aio_bdev 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:00.483 10:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:00.743 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 -t 2000 00:30:00.743 [ 00:30:00.743 { 00:30:00.743 "name": "62b72b9e-d5a2-4d9e-9f39-4eafff6b5921", 00:30:00.743 "aliases": [ 00:30:00.743 "lvs/lvol" 00:30:00.743 ], 00:30:00.743 "product_name": "Logical Volume", 00:30:00.743 "block_size": 4096, 00:30:00.743 "num_blocks": 38912, 00:30:00.743 "uuid": "62b72b9e-d5a2-4d9e-9f39-4eafff6b5921", 00:30:00.743 "assigned_rate_limits": { 00:30:00.743 "rw_ios_per_sec": 0, 00:30:00.743 "rw_mbytes_per_sec": 0, 00:30:00.743 "r_mbytes_per_sec": 0, 00:30:00.743 "w_mbytes_per_sec": 0 00:30:00.743 }, 00:30:00.743 "claimed": false, 00:30:00.743 "zoned": false, 00:30:00.743 "supported_io_types": { 00:30:00.743 "read": true, 00:30:00.743 "write": true, 00:30:00.743 "unmap": true, 00:30:00.743 "flush": false, 00:30:00.743 "reset": true, 00:30:00.743 "nvme_admin": false, 00:30:00.743 "nvme_io": false, 00:30:00.743 "nvme_io_md": false, 00:30:00.743 "write_zeroes": true, 00:30:00.743 "zcopy": false, 00:30:00.743 "get_zone_info": false, 00:30:00.743 "zone_management": false, 00:30:00.743 "zone_append": false, 00:30:00.743 "compare": false, 00:30:00.743 "compare_and_write": false, 00:30:00.743 "abort": false, 00:30:00.743 "seek_hole": true, 00:30:00.743 "seek_data": true, 00:30:00.743 "copy": false, 00:30:00.743 "nvme_iov_md": false 00:30:00.743 }, 00:30:00.743 "driver_specific": { 00:30:00.743 "lvol": { 00:30:00.743 "lvol_store_uuid": "f1251ef0-2448-4a5d-97ae-c678d83b8ba4", 00:30:00.743 "base_bdev": "aio_bdev", 00:30:00.743 
"thin_provision": false, 00:30:00.743 "num_allocated_clusters": 38, 00:30:00.743 "snapshot": false, 00:30:00.743 "clone": false, 00:30:00.743 "esnap_clone": false 00:30:00.743 } 00:30:00.743 } 00:30:00.743 } 00:30:00.743 ] 00:30:00.743 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:00.743 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:30:00.743 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:01.003 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:01.003 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 00:30:01.003 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:01.262 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:01.262 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62b72b9e-d5a2-4d9e-9f39-4eafff6b5921 00:30:01.521 10:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1251ef0-2448-4a5d-97ae-c678d83b8ba4 
00:30:01.521 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:01.781 00:30:01.781 real 0m15.738s 00:30:01.781 user 0m15.191s 00:30:01.781 sys 0m1.527s 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:01.781 ************************************ 00:30:01.781 END TEST lvs_grow_clean 00:30:01.781 ************************************ 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:01.781 ************************************ 00:30:01.781 START TEST lvs_grow_dirty 00:30:01.781 ************************************ 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:01.781 10:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:01.781 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:02.040 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:02.040 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:02.040 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:02.299 10:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=73f9b756-f750-44f0-8b43-e61921be6430 00:30:02.299 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:02.299 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:02.558 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:02.558 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:02.558 10:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73f9b756-f750-44f0-8b43-e61921be6430 lvol 150 00:30:02.817 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:02.817 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:02.817 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:02.818 [2024-11-20 10:08:36.327477] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:02.818 [2024-11-20 
10:08:36.327606] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:02.818 true 00:30:02.818 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:02.818 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:03.077 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:03.077 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:03.336 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:03.336 10:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:03.595 [2024-11-20 10:08:37.067932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.595 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2852700 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2852700 /var/tmp/bdevperf.sock 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2852700 ']' 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:03.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.854 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.854 [2024-11-20 10:08:37.329395] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:30:03.854 [2024-11-20 10:08:37.329441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852700 ] 00:30:03.854 [2024-11-20 10:08:37.401232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.113 [2024-11-20 10:08:37.443705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.113 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.113 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:04.113 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:04.372 Nvme0n1 00:30:04.372 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:04.631 [ 00:30:04.631 { 00:30:04.631 "name": "Nvme0n1", 00:30:04.631 "aliases": [ 00:30:04.631 "67378964-efba-4d5f-9f7b-bde8a23c653b" 00:30:04.631 ], 00:30:04.631 "product_name": "NVMe disk", 00:30:04.631 "block_size": 4096, 00:30:04.631 "num_blocks": 38912, 00:30:04.631 "uuid": "67378964-efba-4d5f-9f7b-bde8a23c653b", 00:30:04.631 "numa_id": 1, 00:30:04.631 "assigned_rate_limits": { 00:30:04.631 "rw_ios_per_sec": 0, 00:30:04.631 "rw_mbytes_per_sec": 0, 00:30:04.631 "r_mbytes_per_sec": 0, 00:30:04.631 "w_mbytes_per_sec": 0 00:30:04.631 }, 00:30:04.631 "claimed": false, 00:30:04.631 "zoned": false, 
00:30:04.631 "supported_io_types": { 00:30:04.631 "read": true, 00:30:04.631 "write": true, 00:30:04.631 "unmap": true, 00:30:04.631 "flush": true, 00:30:04.631 "reset": true, 00:30:04.631 "nvme_admin": true, 00:30:04.631 "nvme_io": true, 00:30:04.631 "nvme_io_md": false, 00:30:04.631 "write_zeroes": true, 00:30:04.631 "zcopy": false, 00:30:04.631 "get_zone_info": false, 00:30:04.631 "zone_management": false, 00:30:04.631 "zone_append": false, 00:30:04.631 "compare": true, 00:30:04.631 "compare_and_write": true, 00:30:04.631 "abort": true, 00:30:04.631 "seek_hole": false, 00:30:04.631 "seek_data": false, 00:30:04.631 "copy": true, 00:30:04.631 "nvme_iov_md": false 00:30:04.632 }, 00:30:04.632 "memory_domains": [ 00:30:04.632 { 00:30:04.632 "dma_device_id": "system", 00:30:04.632 "dma_device_type": 1 00:30:04.632 } 00:30:04.632 ], 00:30:04.632 "driver_specific": { 00:30:04.632 "nvme": [ 00:30:04.632 { 00:30:04.632 "trid": { 00:30:04.632 "trtype": "TCP", 00:30:04.632 "adrfam": "IPv4", 00:30:04.632 "traddr": "10.0.0.2", 00:30:04.632 "trsvcid": "4420", 00:30:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:04.632 }, 00:30:04.632 "ctrlr_data": { 00:30:04.632 "cntlid": 1, 00:30:04.632 "vendor_id": "0x8086", 00:30:04.632 "model_number": "SPDK bdev Controller", 00:30:04.632 "serial_number": "SPDK0", 00:30:04.632 "firmware_revision": "25.01", 00:30:04.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.632 "oacs": { 00:30:04.632 "security": 0, 00:30:04.632 "format": 0, 00:30:04.632 "firmware": 0, 00:30:04.632 "ns_manage": 0 00:30:04.632 }, 00:30:04.632 "multi_ctrlr": true, 00:30:04.632 "ana_reporting": false 00:30:04.632 }, 00:30:04.632 "vs": { 00:30:04.632 "nvme_version": "1.3" 00:30:04.632 }, 00:30:04.632 "ns_data": { 00:30:04.632 "id": 1, 00:30:04.632 "can_share": true 00:30:04.632 } 00:30:04.632 } 00:30:04.632 ], 00:30:04.632 "mp_policy": "active_passive" 00:30:04.632 } 00:30:04.632 } 00:30:04.632 ] 00:30:04.632 10:08:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2852718 00:30:04.632 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:04.632 10:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:04.632 Running I/O for 10 seconds... 00:30:05.568 Latency(us) 00:30:05.568 [2024-11-20T09:08:39.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.568 Nvme0n1 : 1.00 22797.00 89.05 0.00 0.00 0.00 0.00 0.00 00:30:05.568 [2024-11-20T09:08:39.150Z] =================================================================================================================== 00:30:05.568 [2024-11-20T09:08:39.150Z] Total : 22797.00 89.05 0.00 0.00 0.00 0.00 0.00 00:30:05.568 00:30:06.501 10:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:06.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.501 Nvme0n1 : 2.00 23075.00 90.14 0.00 0.00 0.00 0.00 0.00 00:30:06.501 [2024-11-20T09:08:40.083Z] =================================================================================================================== 00:30:06.501 [2024-11-20T09:08:40.083Z] Total : 23075.00 90.14 0.00 0.00 0.00 0.00 0.00 00:30:06.501 00:30:06.761 true 00:30:06.761 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:06.761 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:07.020 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:07.020 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:07.020 10:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2852718 00:30:07.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:07.588 Nvme0n1 : 3.00 23151.67 90.44 0.00 0.00 0.00 0.00 0.00 00:30:07.588 [2024-11-20T09:08:41.170Z] =================================================================================================================== 00:30:07.588 [2024-11-20T09:08:41.170Z] Total : 23151.67 90.44 0.00 0.00 0.00 0.00 0.00 00:30:07.588 00:30:08.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.524 Nvme0n1 : 4.00 23261.75 90.87 0.00 0.00 0.00 0.00 0.00 00:30:08.524 [2024-11-20T09:08:42.106Z] =================================================================================================================== 00:30:08.524 [2024-11-20T09:08:42.106Z] Total : 23261.75 90.87 0.00 0.00 0.00 0.00 0.00 00:30:08.524 00:30:09.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:09.901 Nvme0n1 : 5.00 23333.80 91.15 0.00 0.00 0.00 0.00 0.00 00:30:09.901 [2024-11-20T09:08:43.483Z] =================================================================================================================== 00:30:09.901 [2024-11-20T09:08:43.483Z] Total : 23333.80 91.15 0.00 0.00 0.00 0.00 0.00 00:30:09.901 00:30:10.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:10.839 Nvme0n1 : 6.00 23381.83 91.34 0.00 0.00 0.00 0.00 0.00 00:30:10.839 [2024-11-20T09:08:44.421Z] =================================================================================================================== 00:30:10.839 [2024-11-20T09:08:44.421Z] Total : 23381.83 91.34 0.00 0.00 0.00 0.00 0.00 00:30:10.839 00:30:11.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.779 Nvme0n1 : 7.00 23421.00 91.49 0.00 0.00 0.00 0.00 0.00 00:30:11.779 [2024-11-20T09:08:45.361Z] =================================================================================================================== 00:30:11.779 [2024-11-20T09:08:45.361Z] Total : 23421.00 91.49 0.00 0.00 0.00 0.00 0.00 00:30:11.779 00:30:12.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.714 Nvme0n1 : 8.00 23462.00 91.65 0.00 0.00 0.00 0.00 0.00 00:30:12.714 [2024-11-20T09:08:46.296Z] =================================================================================================================== 00:30:12.714 [2024-11-20T09:08:46.296Z] Total : 23462.00 91.65 0.00 0.00 0.00 0.00 0.00 00:30:12.714 00:30:13.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.651 Nvme0n1 : 9.00 23479.78 91.72 0.00 0.00 0.00 0.00 0.00 00:30:13.651 [2024-11-20T09:08:47.233Z] =================================================================================================================== 00:30:13.651 [2024-11-20T09:08:47.233Z] Total : 23479.78 91.72 0.00 0.00 0.00 0.00 0.00 00:30:13.651 00:30:14.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.588 Nvme0n1 : 10.00 23506.70 91.82 0.00 0.00 0.00 0.00 0.00 00:30:14.588 [2024-11-20T09:08:48.170Z] =================================================================================================================== 00:30:14.588 [2024-11-20T09:08:48.170Z] Total : 23506.70 91.82 0.00 0.00 0.00 0.00 0.00 00:30:14.588 00:30:14.588 
00:30:14.588 Latency(us) 00:30:14.588 [2024-11-20T09:08:48.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.588 Nvme0n1 : 10.00 23509.68 91.83 0.00 0.00 5441.64 3089.55 26713.72 00:30:14.588 [2024-11-20T09:08:48.170Z] =================================================================================================================== 00:30:14.588 [2024-11-20T09:08:48.170Z] Total : 23509.68 91.83 0.00 0.00 5441.64 3089.55 26713.72 00:30:14.588 { 00:30:14.588 "results": [ 00:30:14.588 { 00:30:14.588 "job": "Nvme0n1", 00:30:14.588 "core_mask": "0x2", 00:30:14.588 "workload": "randwrite", 00:30:14.588 "status": "finished", 00:30:14.588 "queue_depth": 128, 00:30:14.588 "io_size": 4096, 00:30:14.588 "runtime": 10.004176, 00:30:14.588 "iops": 23509.682356647863, 00:30:14.588 "mibps": 91.83469670565572, 00:30:14.588 "io_failed": 0, 00:30:14.588 "io_timeout": 0, 00:30:14.588 "avg_latency_us": 5441.639872114223, 00:30:14.588 "min_latency_us": 3089.554285714286, 00:30:14.588 "max_latency_us": 26713.721904761904 00:30:14.588 } 00:30:14.588 ], 00:30:14.588 "core_count": 1 00:30:14.588 } 00:30:14.588 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2852700 00:30:14.588 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2852700 ']' 00:30:14.588 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2852700 00:30:14.588 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:14.588 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.588 10:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852700 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852700' 00:30:14.847 killing process with pid 2852700 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2852700 00:30:14.847 Received shutdown signal, test time was about 10.000000 seconds 00:30:14.847 00:30:14.847 Latency(us) 00:30:14.847 [2024-11-20T09:08:48.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.847 [2024-11-20T09:08:48.429Z] =================================================================================================================== 00:30:14.847 [2024-11-20T09:08:48.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2852700 00:30:14.847 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.106 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.364 10:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:15.364 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:15.364 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:15.364 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:15.364 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2849609 00:30:15.364 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2849609 00:30:15.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2849609 Killed "${NVMF_APP[@]}" "$@" 00:30:15.622 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:15.622 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:15.622 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.622 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2854554 00:30:15.623 10:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2854554 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2854554 ']' 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.623 10:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:15.623 [2024-11-20 10:08:49.039155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.623 [2024-11-20 10:08:49.040055] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:30:15.623 [2024-11-20 10:08:49.040090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.623 [2024-11-20 10:08:49.118402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.623 [2024-11-20 10:08:49.158381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.623 [2024-11-20 10:08:49.158418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.623 [2024-11-20 10:08:49.158426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.623 [2024-11-20 10:08:49.158432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.623 [2024-11-20 10:08:49.158437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.623 [2024-11-20 10:08:49.158965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.881 [2024-11-20 10:08:49.224775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:15.881 [2024-11-20 10:08:49.224980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.881 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:16.140 [2024-11-20 10:08:49.464429] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:16.140 [2024-11-20 10:08:49.464633] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:16.140 [2024-11-20 10:08:49.464716] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:16.140 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67378964-efba-4d5f-9f7b-bde8a23c653b -t 2000 00:30:16.398 [ 00:30:16.398 { 00:30:16.398 "name": "67378964-efba-4d5f-9f7b-bde8a23c653b", 00:30:16.398 "aliases": [ 00:30:16.398 "lvs/lvol" 00:30:16.398 ], 00:30:16.398 "product_name": "Logical Volume", 00:30:16.398 "block_size": 4096, 00:30:16.398 "num_blocks": 38912, 00:30:16.398 "uuid": "67378964-efba-4d5f-9f7b-bde8a23c653b", 00:30:16.398 "assigned_rate_limits": { 00:30:16.398 "rw_ios_per_sec": 0, 00:30:16.398 "rw_mbytes_per_sec": 0, 00:30:16.398 "r_mbytes_per_sec": 0, 00:30:16.398 "w_mbytes_per_sec": 0 00:30:16.398 }, 00:30:16.398 "claimed": false, 00:30:16.398 "zoned": false, 00:30:16.398 "supported_io_types": { 00:30:16.398 "read": true, 00:30:16.398 "write": true, 00:30:16.398 "unmap": true, 00:30:16.398 "flush": false, 00:30:16.398 "reset": true, 00:30:16.398 "nvme_admin": false, 00:30:16.398 "nvme_io": false, 00:30:16.398 "nvme_io_md": false, 00:30:16.398 "write_zeroes": true, 
00:30:16.398 "zcopy": false, 00:30:16.398 "get_zone_info": false, 00:30:16.398 "zone_management": false, 00:30:16.398 "zone_append": false, 00:30:16.398 "compare": false, 00:30:16.398 "compare_and_write": false, 00:30:16.398 "abort": false, 00:30:16.398 "seek_hole": true, 00:30:16.398 "seek_data": true, 00:30:16.398 "copy": false, 00:30:16.398 "nvme_iov_md": false 00:30:16.398 }, 00:30:16.398 "driver_specific": { 00:30:16.398 "lvol": { 00:30:16.398 "lvol_store_uuid": "73f9b756-f750-44f0-8b43-e61921be6430", 00:30:16.398 "base_bdev": "aio_bdev", 00:30:16.399 "thin_provision": false, 00:30:16.399 "num_allocated_clusters": 38, 00:30:16.399 "snapshot": false, 00:30:16.399 "clone": false, 00:30:16.399 "esnap_clone": false 00:30:16.399 } 00:30:16.399 } 00:30:16.399 } 00:30:16.399 ] 00:30:16.399 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:16.399 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:16.399 10:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:16.656 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:16.657 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:16.657 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:16.914 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:16.914 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:16.914 [2024-11-20 10:08:50.467513] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:17.173 request: 00:30:17.173 { 00:30:17.173 "uuid": "73f9b756-f750-44f0-8b43-e61921be6430", 00:30:17.173 "method": "bdev_lvol_get_lvstores", 00:30:17.173 "req_id": 1 00:30:17.173 } 00:30:17.173 Got JSON-RPC error response 00:30:17.173 response: 00:30:17.173 { 00:30:17.173 "code": -19, 00:30:17.173 "message": "No such device" 00:30:17.173 } 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:17.173 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:17.431 aio_bdev 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:17.431 10:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:17.689 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67378964-efba-4d5f-9f7b-bde8a23c653b -t 2000 00:30:17.948 [ 00:30:17.948 { 00:30:17.948 "name": "67378964-efba-4d5f-9f7b-bde8a23c653b", 00:30:17.948 "aliases": [ 00:30:17.948 "lvs/lvol" 00:30:17.948 ], 00:30:17.948 "product_name": "Logical Volume", 00:30:17.948 "block_size": 4096, 00:30:17.948 "num_blocks": 38912, 00:30:17.948 "uuid": "67378964-efba-4d5f-9f7b-bde8a23c653b", 00:30:17.948 "assigned_rate_limits": { 00:30:17.948 "rw_ios_per_sec": 0, 00:30:17.948 "rw_mbytes_per_sec": 0, 00:30:17.948 
"r_mbytes_per_sec": 0, 00:30:17.948 "w_mbytes_per_sec": 0 00:30:17.948 }, 00:30:17.948 "claimed": false, 00:30:17.948 "zoned": false, 00:30:17.948 "supported_io_types": { 00:30:17.948 "read": true, 00:30:17.948 "write": true, 00:30:17.948 "unmap": true, 00:30:17.948 "flush": false, 00:30:17.948 "reset": true, 00:30:17.948 "nvme_admin": false, 00:30:17.948 "nvme_io": false, 00:30:17.948 "nvme_io_md": false, 00:30:17.948 "write_zeroes": true, 00:30:17.948 "zcopy": false, 00:30:17.948 "get_zone_info": false, 00:30:17.948 "zone_management": false, 00:30:17.948 "zone_append": false, 00:30:17.948 "compare": false, 00:30:17.948 "compare_and_write": false, 00:30:17.948 "abort": false, 00:30:17.948 "seek_hole": true, 00:30:17.948 "seek_data": true, 00:30:17.948 "copy": false, 00:30:17.948 "nvme_iov_md": false 00:30:17.948 }, 00:30:17.948 "driver_specific": { 00:30:17.948 "lvol": { 00:30:17.948 "lvol_store_uuid": "73f9b756-f750-44f0-8b43-e61921be6430", 00:30:17.948 "base_bdev": "aio_bdev", 00:30:17.948 "thin_provision": false, 00:30:17.948 "num_allocated_clusters": 38, 00:30:17.948 "snapshot": false, 00:30:17.948 "clone": false, 00:30:17.948 "esnap_clone": false 00:30:17.948 } 00:30:17.948 } 00:30:17.948 } 00:30:17.948 ] 00:30:17.948 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:17.948 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:17.948 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:17.948 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:17.948 10:08:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:17.948 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:18.207 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:18.207 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67378964-efba-4d5f-9f7b-bde8a23c653b 00:30:18.466 10:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73f9b756-f750-44f0-8b43-e61921be6430 00:30:18.723 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:18.723 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:18.982 00:30:18.982 real 0m16.961s 00:30:18.982 user 0m34.278s 00:30:18.982 sys 0m3.885s 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:18.982 ************************************ 00:30:18.982 END TEST lvs_grow_dirty 00:30:18.982 ************************************ 
00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:18.982 nvmf_trace.0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.982 10:08:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.982 rmmod nvme_tcp 00:30:18.982 rmmod nvme_fabrics 00:30:18.982 rmmod nvme_keyring 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2854554 ']' 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2854554 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2854554 ']' 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2854554 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854554 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.982 
10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854554' 00:30:18.982 killing process with pid 2854554 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2854554 00:30:18.982 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2854554 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.241 10:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.778 
10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.778 00:30:21.778 real 0m41.936s 00:30:21.778 user 0m52.025s 00:30:21.778 sys 0m10.296s 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:21.778 ************************************ 00:30:21.778 END TEST nvmf_lvs_grow 00:30:21.778 ************************************ 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:21.778 ************************************ 00:30:21.778 START TEST nvmf_bdev_io_wait 00:30:21.778 ************************************ 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:21.778 * Looking for test storage... 
00:30:21.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.778 10:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.778 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.779 --rc genhtml_branch_coverage=1 00:30:21.779 --rc genhtml_function_coverage=1 00:30:21.779 --rc genhtml_legend=1 00:30:21.779 --rc geninfo_all_blocks=1 00:30:21.779 --rc geninfo_unexecuted_blocks=1 00:30:21.779 00:30:21.779 ' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.779 --rc genhtml_branch_coverage=1 00:30:21.779 --rc genhtml_function_coverage=1 00:30:21.779 --rc genhtml_legend=1 00:30:21.779 --rc geninfo_all_blocks=1 00:30:21.779 --rc geninfo_unexecuted_blocks=1 00:30:21.779 00:30:21.779 ' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.779 --rc genhtml_branch_coverage=1 00:30:21.779 --rc genhtml_function_coverage=1 00:30:21.779 --rc genhtml_legend=1 00:30:21.779 --rc geninfo_all_blocks=1 00:30:21.779 --rc geninfo_unexecuted_blocks=1 00:30:21.779 00:30:21.779 ' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.779 --rc genhtml_branch_coverage=1 00:30:21.779 --rc genhtml_function_coverage=1 
00:30:21.779 --rc genhtml_legend=1 00:30:21.779 --rc geninfo_all_blocks=1 00:30:21.779 --rc geninfo_unexecuted_blocks=1 00:30:21.779 00:30:21.779 ' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:21.779 10:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.779 10:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:21.779 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.779 10:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.780 10:08:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.780 10:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:28.347 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:28.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:28.347 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:28.347 Found net devices under 0000:86:00.0: cvl_0_0 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:28.347 Found net devices under 0000:86:00.1: cvl_0_1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.347 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:28.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:30:28.347 00:30:28.347 --- 10.0.0.2 ping statistics --- 00:30:28.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.347 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:28.347 00:30:28.347 --- 10.0.0.1 ping statistics --- 00:30:28.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.347 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:28.347 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.348 10:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2858668 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2858668 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2858668 ']' 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.348 10:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 [2024-11-20 10:09:01.018163] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.348 [2024-11-20 10:09:01.019065] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:28.348 [2024-11-20 10:09:01.019099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.348 [2024-11-20 10:09:01.096287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.348 [2024-11-20 10:09:01.139633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.348 [2024-11-20 10:09:01.139674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.348 [2024-11-20 10:09:01.139681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.348 [2024-11-20 10:09:01.139686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.348 [2024-11-20 10:09:01.139692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.348 [2024-11-20 10:09:01.141271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.348 [2024-11-20 10:09:01.141379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.348 [2024-11-20 10:09:01.141488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.348 [2024-11-20 10:09:01.141490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.348 [2024-11-20 10:09:01.141746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 [2024-11-20 10:09:01.279404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.348 [2024-11-20 10:09:01.279824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:28.348 [2024-11-20 10:09:01.280238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:28.348 [2024-11-20 10:09:01.280353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 [2024-11-20 10:09:01.290173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 Malloc0 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.348 [2024-11-20 10:09:01.362448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2858724 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2858727 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:28.348 10:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.348 { 00:30:28.348 "params": { 00:30:28.348 "name": "Nvme$subsystem", 00:30:28.348 "trtype": "$TEST_TRANSPORT", 00:30:28.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.348 "adrfam": "ipv4", 00:30:28.348 "trsvcid": "$NVMF_PORT", 00:30:28.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.348 "hdgst": ${hdgst:-false}, 00:30:28.348 "ddgst": ${ddgst:-false} 00:30:28.348 }, 00:30:28.348 "method": "bdev_nvme_attach_controller" 00:30:28.348 } 00:30:28.348 EOF 00:30:28.348 )") 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2858731 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:28.348 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.349 10:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2858734 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.349 { 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme$subsystem", 00:30:28.349 "trtype": "$TEST_TRANSPORT", 00:30:28.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "$NVMF_PORT", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.349 "hdgst": ${hdgst:-false}, 00:30:28.349 "ddgst": ${ddgst:-false} 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 00:30:28.349 } 00:30:28.349 EOF 00:30:28.349 )") 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.349 { 00:30:28.349 "params": { 00:30:28.349 "name": 
"Nvme$subsystem", 00:30:28.349 "trtype": "$TEST_TRANSPORT", 00:30:28.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "$NVMF_PORT", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.349 "hdgst": ${hdgst:-false}, 00:30:28.349 "ddgst": ${ddgst:-false} 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 00:30:28.349 } 00:30:28.349 EOF 00:30:28.349 )") 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.349 { 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme$subsystem", 00:30:28.349 "trtype": "$TEST_TRANSPORT", 00:30:28.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "$NVMF_PORT", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.349 "hdgst": ${hdgst:-false}, 00:30:28.349 "ddgst": ${ddgst:-false} 00:30:28.349 }, 00:30:28.349 "method": 
"bdev_nvme_attach_controller" 00:30:28.349 } 00:30:28.349 EOF 00:30:28.349 )") 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2858724 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme1", 00:30:28.349 "trtype": "tcp", 00:30:28.349 "traddr": "10.0.0.2", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "4420", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.349 "hdgst": false, 00:30:28.349 "ddgst": false 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 00:30:28.349 }' 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme1", 00:30:28.349 "trtype": "tcp", 00:30:28.349 "traddr": "10.0.0.2", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "4420", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.349 "hdgst": false, 00:30:28.349 "ddgst": false 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 00:30:28.349 }' 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme1", 00:30:28.349 "trtype": "tcp", 00:30:28.349 "traddr": "10.0.0.2", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "4420", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.349 "hdgst": false, 00:30:28.349 "ddgst": false 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 00:30:28.349 }' 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:28.349 10:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.349 "params": { 00:30:28.349 "name": "Nvme1", 00:30:28.349 "trtype": "tcp", 00:30:28.349 "traddr": "10.0.0.2", 00:30:28.349 "adrfam": "ipv4", 00:30:28.349 "trsvcid": "4420", 00:30:28.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.349 "hdgst": false, 00:30:28.349 "ddgst": false 00:30:28.349 }, 00:30:28.349 "method": "bdev_nvme_attach_controller" 
00:30:28.349 }' 00:30:28.349 [2024-11-20 10:09:01.413962] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:28.349 [2024-11-20 10:09:01.414018] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:28.349 [2024-11-20 10:09:01.416966] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:28.349 [2024-11-20 10:09:01.416966] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:28.349 [2024-11-20 10:09:01.416975] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:28.349 [2024-11-20 10:09:01.417023] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:28.349 [2024-11-20 10:09:01.417024] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:28.349 [2024-11-20 10:09:01.417024] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:28.349 [2024-11-20 10:09:01.608152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.349 [2024-11-20 10:09:01.653856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.349 [2024-11-20 10:09:01.684325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.349 [2024-11-20 10:09:01.728464] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:28.349 [2024-11-20 10:09:01.731920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.349 [2024-11-20 10:09:01.772105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:28.349 [2024-11-20 10:09:01.783352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.349 [2024-11-20 10:09:01.825626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:28.349 Running I/O for 1 seconds... 00:30:28.608 Running I/O for 1 seconds... 00:30:28.608 Running I/O for 1 seconds... 00:30:28.608 Running I/O for 1 seconds... 00:30:29.545 16576.00 IOPS, 64.75 MiB/s 00:30:29.545 Latency(us) 00:30:29.545 [2024-11-20T09:09:03.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.545 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:29.545 Nvme1n1 : 1.01 16640.45 65.00 0.00 0.00 7673.53 3386.03 9050.21 00:30:29.545 [2024-11-20T09:09:03.127Z] =================================================================================================================== 00:30:29.545 [2024-11-20T09:09:03.127Z] Total : 16640.45 65.00 0.00 0.00 7673.53 3386.03 9050.21 00:30:29.545 6785.00 IOPS, 26.50 MiB/s 00:30:29.545 Latency(us) 00:30:29.545 [2024-11-20T09:09:03.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.545 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:29.545 Nvme1n1 : 1.02 6807.30 26.59 0.00 0.00 18674.44 1451.15 27962.03 00:30:29.545 [2024-11-20T09:09:03.127Z] =================================================================================================================== 00:30:29.545 [2024-11-20T09:09:03.127Z] Total : 6807.30 26.59 0.00 0.00 18674.44 1451.15 27962.03 00:30:29.545 254216.00 IOPS, 993.03 MiB/s 00:30:29.545 Latency(us) 00:30:29.545 [2024-11-20T09:09:03.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:30:29.545 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:29.545 Nvme1n1 : 1.00 253832.08 991.53 0.00 0.00 501.60 221.38 1490.16 00:30:29.545 [2024-11-20T09:09:03.127Z] =================================================================================================================== 00:30:29.545 [2024-11-20T09:09:03.127Z] Total : 253832.08 991.53 0.00 0.00 501.60 221.38 1490.16 00:30:29.804 7031.00 IOPS, 27.46 MiB/s 00:30:29.804 Latency(us) 00:30:29.804 [2024-11-20T09:09:03.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.804 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:29.804 Nvme1n1 : 1.05 6850.80 26.76 0.00 0.00 17932.88 3978.97 44189.99 00:30:29.804 [2024-11-20T09:09:03.386Z] =================================================================================================================== 00:30:29.804 [2024-11-20T09:09:03.386Z] Total : 6850.80 26.76 0.00 0.00 17932.88 3978.97 44189.99 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2858727 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2858731 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2858734 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.804 10:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.804 rmmod nvme_tcp 00:30:29.804 rmmod nvme_fabrics 00:30:29.804 rmmod nvme_keyring 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2858668 ']' 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2858668 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2858668 ']' 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2858668 00:30:29.804 10:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.804 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858668 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858668' 00:30:30.063 killing process with pid 2858668 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2858668 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2858668 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # 
iptables-restore 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.063 10:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.598 00:30:32.598 real 0m10.762s 00:30:32.598 user 0m15.352s 00:30:32.598 sys 0m6.332s 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:32.598 ************************************ 00:30:32.598 END TEST nvmf_bdev_io_wait 00:30:32.598 ************************************ 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.598 
************************************ 00:30:32.598 START TEST nvmf_queue_depth 00:30:32.598 ************************************ 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:32.598 * Looking for test storage... 00:30:32.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@338 -- # local 'op=<' 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:32.598 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@355 -- # echo 2 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:32.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.599 --rc genhtml_branch_coverage=1 00:30:32.599 --rc genhtml_function_coverage=1 00:30:32.599 --rc genhtml_legend=1 00:30:32.599 --rc geninfo_all_blocks=1 00:30:32.599 --rc geninfo_unexecuted_blocks=1 00:30:32.599 00:30:32.599 ' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:32.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.599 --rc genhtml_branch_coverage=1 00:30:32.599 --rc genhtml_function_coverage=1 00:30:32.599 --rc genhtml_legend=1 00:30:32.599 --rc geninfo_all_blocks=1 00:30:32.599 --rc geninfo_unexecuted_blocks=1 00:30:32.599 00:30:32.599 ' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:32.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.599 --rc genhtml_branch_coverage=1 00:30:32.599 --rc genhtml_function_coverage=1 00:30:32.599 --rc genhtml_legend=1 00:30:32.599 --rc geninfo_all_blocks=1 
00:30:32.599 --rc geninfo_unexecuted_blocks=1 00:30:32.599 00:30:32.599 ' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:32.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.599 --rc genhtml_branch_coverage=1 00:30:32.599 --rc genhtml_function_coverage=1 00:30:32.599 --rc genhtml_legend=1 00:30:32.599 --rc geninfo_all_blocks=1 00:30:32.599 --rc geninfo_unexecuted_blocks=1 00:30:32.599 00:30:32.599 ' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.599 
10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.599 10:09:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.599 10:09:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.599 10:09:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.599 10:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.166 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.166 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.167 
10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:39.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.167 10:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:39.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:39.167 Found net devices under 0000:86:00.0: cvl_0_0 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:39.167 Found net devices under 0000:86:00.1: cvl_0_1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.167 10:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.167 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:39.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:30:39.168 00:30:39.168 --- 10.0.0.2 ping statistics --- 00:30:39.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.168 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:39.168 00:30:39.168 --- 10.0.0.1 ping statistics --- 00:30:39.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.168 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.168 10:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2863002 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2863002 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2863002 ']' 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.168 10:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.168 [2024-11-20 10:09:11.851122] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:39.168 [2024-11-20 10:09:11.852013] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:30:39.168 [2024-11-20 10:09:11.852048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.168 [2024-11-20 10:09:11.934372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.168 [2024-11-20 10:09:11.974330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.168 [2024-11-20 10:09:11.974367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.168 [2024-11-20 10:09:11.974374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.168 [2024-11-20 10:09:11.974380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.168 [2024-11-20 10:09:11.974385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.168 [2024-11-20 10:09:11.974938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.168 [2024-11-20 10:09:12.040614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.168 [2024-11-20 10:09:12.040844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.168 [2024-11-20 10:09:12.715657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.168 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.427 Malloc0 00:30:39.427 10:09:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.427 [2024-11-20 10:09:12.787747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.427 
10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2863162 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2863162 /var/tmp/bdevperf.sock 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2863162 ']' 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:39.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.427 10:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.427 [2024-11-20 10:09:12.837772] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:30:39.427 [2024-11-20 10:09:12.837812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863162 ] 00:30:39.427 [2024-11-20 10:09:12.909193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.427 [2024-11-20 10:09:12.949491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.685 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.685 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:39.685 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.685 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.685 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.944 NVMe0n1 00:30:39.944 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.944 10:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:39.944 Running I/O for 10 seconds... 
00:30:42.255 11780.00 IOPS, 46.02 MiB/s [2024-11-20T09:09:16.456Z] 12236.00 IOPS, 47.80 MiB/s [2024-11-20T09:09:17.435Z] 12277.33 IOPS, 47.96 MiB/s [2024-11-20T09:09:18.811Z] 12291.00 IOPS, 48.01 MiB/s [2024-11-20T09:09:19.748Z] 12371.20 IOPS, 48.33 MiB/s [2024-11-20T09:09:20.685Z] 12453.83 IOPS, 48.65 MiB/s [2024-11-20T09:09:21.623Z] 12439.00 IOPS, 48.59 MiB/s [2024-11-20T09:09:22.556Z] 12497.38 IOPS, 48.82 MiB/s [2024-11-20T09:09:23.492Z] 12512.33 IOPS, 48.88 MiB/s [2024-11-20T09:09:23.492Z] 12523.00 IOPS, 48.92 MiB/s 00:30:49.910 Latency(us) 00:30:49.910 [2024-11-20T09:09:23.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.910 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:49.910 Verification LBA range: start 0x0 length 0x4000 00:30:49.910 NVMe0n1 : 10.05 12559.05 49.06 0.00 0.00 81244.61 12670.29 51929.48 00:30:49.910 [2024-11-20T09:09:23.492Z] =================================================================================================================== 00:30:49.910 [2024-11-20T09:09:23.492Z] Total : 12559.05 49.06 0.00 0.00 81244.61 12670.29 51929.48 00:30:49.910 { 00:30:49.910 "results": [ 00:30:49.910 { 00:30:49.910 "job": "NVMe0n1", 00:30:49.910 "core_mask": "0x1", 00:30:49.910 "workload": "verify", 00:30:49.910 "status": "finished", 00:30:49.910 "verify_range": { 00:30:49.910 "start": 0, 00:30:49.910 "length": 16384 00:30:49.910 }, 00:30:49.910 "queue_depth": 1024, 00:30:49.910 "io_size": 4096, 00:30:49.910 "runtime": 10.054982, 00:30:49.910 "iops": 12559.047843148799, 00:30:49.910 "mibps": 49.058780637299996, 00:30:49.910 "io_failed": 0, 00:30:49.910 "io_timeout": 0, 00:30:49.910 "avg_latency_us": 81244.61197464008, 00:30:49.910 "min_latency_us": 12670.293333333333, 00:30:49.910 "max_latency_us": 51929.4780952381 00:30:49.910 } 00:30:49.910 ], 00:30:49.910 "core_count": 1 00:30:49.910 } 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2863162 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2863162 ']' 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2863162 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863162 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863162' 00:30:50.169 killing process with pid 2863162 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2863162 00:30:50.169 Received shutdown signal, test time was about 10.000000 seconds 00:30:50.169 00:30:50.169 Latency(us) 00:30:50.169 [2024-11-20T09:09:23.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.169 [2024-11-20T09:09:23.751Z] =================================================================================================================== 00:30:50.169 [2024-11-20T09:09:23.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2863162 00:30:50.169 10:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.169 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.169 rmmod nvme_tcp 00:30:50.169 rmmod nvme_fabrics 00:30:50.428 rmmod nvme_keyring 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2863002 ']' 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2863002 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2863002 ']' 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2863002 00:30:50.428 10:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863002 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863002' 00:30:50.428 killing process with pid 2863002 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2863002 00:30:50.428 10:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2863002 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.687 10:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.618 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.618 00:30:52.618 real 0m20.400s 00:30:52.618 user 0m23.047s 00:30:52.618 sys 0m6.342s 00:30:52.618 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.618 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:52.619 ************************************ 00:30:52.619 END TEST nvmf_queue_depth 00:30:52.619 ************************************ 00:30:52.619 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:52.619 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:52.619 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.619 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:52.619 ************************************ 00:30:52.619 START 
TEST nvmf_target_multipath 00:30:52.619 ************************************ 00:30:52.619 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:52.879 * Looking for test storage... 00:30:52.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.879 10:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.879 --rc genhtml_branch_coverage=1 00:30:52.879 --rc genhtml_function_coverage=1 00:30:52.879 --rc genhtml_legend=1 00:30:52.879 --rc geninfo_all_blocks=1 00:30:52.879 --rc geninfo_unexecuted_blocks=1 00:30:52.879 00:30:52.879 ' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.879 --rc genhtml_branch_coverage=1 00:30:52.879 --rc genhtml_function_coverage=1 00:30:52.879 --rc genhtml_legend=1 00:30:52.879 --rc geninfo_all_blocks=1 00:30:52.879 --rc geninfo_unexecuted_blocks=1 00:30:52.879 00:30:52.879 ' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.879 --rc genhtml_branch_coverage=1 00:30:52.879 --rc genhtml_function_coverage=1 00:30:52.879 --rc genhtml_legend=1 00:30:52.879 --rc geninfo_all_blocks=1 00:30:52.879 --rc geninfo_unexecuted_blocks=1 00:30:52.879 00:30:52.879 ' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.879 --rc genhtml_branch_coverage=1 00:30:52.879 --rc genhtml_function_coverage=1 00:30:52.879 --rc genhtml_legend=1 00:30:52.879 --rc geninfo_all_blocks=1 00:30:52.879 --rc geninfo_unexecuted_blocks=1 00:30:52.879 00:30:52.879 ' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.879 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.880 10:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.880 10:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.880 10:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.449 10:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:59.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.449 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:59.450 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.450 10:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:59.450 Found net devices under 0000:86:00.0: cvl_0_0 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.450 10:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:59.450 Found net devices under 0000:86:00.1: cvl_0_1 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.450 10:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.450 10:09:32 
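The `nvmf_tcp_init` sequence traced above builds a loopback test topology on one host: the target NIC (`cvl_0_0`) is moved into a fresh network namespace (`cvl_0_0_ns_spdk`) and given 10.0.0.2/24, while the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1/24, so traffic between the two addresses crosses the physical link. The sketch below restates those commands; `setup_nvmf_ns` and the dry-run `run` wrapper are hypothetical helpers (the interface names, namespace name, and IPs are taken from the log). It echoes commands instead of executing them, since applying them for real requires root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
# `run` echoes each command; swap its body for "$@" (as root) to apply.
run() { echo "$@"; }

setup_nvmf_ns() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"            # target NIC moves into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"        # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
}

setup_nvmf_ns
```

The log then verifies the path in both directions (`ping -c 1 10.0.0.2` from the root namespace, `ip netns exec … ping -c 1 10.0.0.1` from inside it) before prefixing `NVMF_APP` with `ip netns exec "$NVMF_TARGET_NAMESPACE"` so the SPDK target runs inside the namespace.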
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:30:59.450 00:30:59.450 --- 10.0.0.2 ping statistics --- 00:30:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.450 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:59.450 00:30:59.450 --- 10.0.0.1 ping statistics --- 00:30:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.450 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.450 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:59.451 only one NIC for nvmf test 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:59.451 10:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.451 rmmod nvme_tcp 00:30:59.451 rmmod nvme_fabrics 00:30:59.451 rmmod nvme_keyring 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:59.451 10:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.451 10:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
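The firewall handling above is a tag-and-sweep pattern: `ipts` installs each ACCEPT rule with `-m comment --comment 'SPDK_NVMF:…'`, and `iptr` later removes every tagged rule at once via `iptables-save | grep -v SPDK_NVMF | iptables-restore`, without tracking individual rules. The sketch below simulates that sweep against a plain file standing in for `iptables-save` output (the rule lines are illustrative, not from the log), so it runs without root; the real form pipes straight between `iptables-save` and `iptables-restore`.

```shell
#!/usr/bin/env bash
# Tag-and-sweep cleanup, simulated on a file instead of live kernel tables.
# Real form (root): iptables-save | grep -v SPDK_NVMF | iptables-restore
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: nvmf test rule"
-A INPUT -j DROP
EOF

# Drop every rule carrying the SPDK_NVMF tag; untagged rules survive.
grep -v SPDK_NVMF "$RULES" > "$RULES.clean"
```

Tagging rules with a fixed marker makes cleanup idempotent: the sweep is safe to run even if the setup step never ran, which is why the log's `nvmftestfini` can call `iptr` unconditionally.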
00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.354 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.355 
10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.355 00:31:01.355 real 0m8.296s 00:31:01.355 user 0m1.790s 00:31:01.355 sys 0m4.512s 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:01.355 ************************************ 00:31:01.355 END TEST nvmf_target_multipath 00:31:01.355 ************************************ 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:01.355 ************************************ 00:31:01.355 START TEST nvmf_zcopy 00:31:01.355 ************************************ 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:01.355 * Looking for test storage... 
00:31:01.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:01.355 10:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.355 --rc genhtml_branch_coverage=1 00:31:01.355 --rc genhtml_function_coverage=1 00:31:01.355 --rc genhtml_legend=1 00:31:01.355 --rc geninfo_all_blocks=1 00:31:01.355 --rc geninfo_unexecuted_blocks=1 00:31:01.355 00:31:01.355 ' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.355 --rc genhtml_branch_coverage=1 00:31:01.355 --rc genhtml_function_coverage=1 00:31:01.355 --rc genhtml_legend=1 00:31:01.355 --rc geninfo_all_blocks=1 00:31:01.355 --rc geninfo_unexecuted_blocks=1 00:31:01.355 00:31:01.355 ' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.355 --rc genhtml_branch_coverage=1 00:31:01.355 --rc genhtml_function_coverage=1 00:31:01.355 --rc genhtml_legend=1 00:31:01.355 --rc geninfo_all_blocks=1 00:31:01.355 --rc geninfo_unexecuted_blocks=1 00:31:01.355 00:31:01.355 ' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.355 --rc genhtml_branch_coverage=1 00:31:01.355 --rc genhtml_function_coverage=1 00:31:01.355 --rc genhtml_legend=1 00:31:01.355 --rc geninfo_all_blocks=1 00:31:01.355 --rc geninfo_unexecuted_blocks=1 00:31:01.355 00:31:01.355 ' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.355 10:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.355 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.356 10:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.356 10:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.919 
10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.919 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.920 10:09:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:07.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:07.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:07.920 Found net devices under 0000:86:00.0: cvl_0_0 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:07.920 Found net devices under 0000:86:00.1: cvl_0_1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
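The `nvmf_tcp_init` sequence traced next (address flush, namespace creation, address assignment, firewall rule, ping checks) can be condensed into a standalone sketch. Interface names and addresses are taken from this log; the `run` wrapper and `DRYRUN` flag are illustrative additions so the privileged commands can be previewed without root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from nvmf/common.sh as traced in this log.
# DRYRUN=1 (the default here) prints each privileged command instead of running it.
: "${DRYRUN:=1}"
TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port for traffic arriving on the initiator side.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity checks mirrored from the log: one ping in each direction.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving the target interface into its own namespace lets initiator and target share one physical host while still crossing a real network path, which is exactly what the two ping blocks in the trace then verify.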
00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.920 10:09:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:31:07.920 00:31:07.920 --- 10.0.0.2 ping statistics --- 00:31:07.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.920 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:31:07.920 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:07.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:31:07.920 00:31:07.921 --- 10.0.0.1 ping statistics --- 00:31:07.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.921 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2871813 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2871813 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2871813 ']' 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.921 10:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:07.921 [2024-11-20 10:09:40.688872] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:07.921 [2024-11-20 10:09:40.689770] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:31:07.921 [2024-11-20 10:09:40.689804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.921 [2024-11-20 10:09:40.768918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.921 [2024-11-20 10:09:40.807473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.921 [2024-11-20 10:09:40.807509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.921 [2024-11-20 10:09:40.807516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.921 [2024-11-20 10:09:40.807521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.921 [2024-11-20 10:09:40.807526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.921 [2024-11-20 10:09:40.808057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.921 [2024-11-20 10:09:40.874962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:07.921 [2024-11-20 10:09:40.875174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
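Between the EAL notices above and the `return 0` below, `nvmfappstart`/`waitforlisten` block until the target's RPC socket appears. A minimal poll loop in that spirit (the `wait_for_sock` helper name is ours, not SPDK's; the commented-out launch line is the one from the log):

```shell
# Poll until a UNIX-domain socket shows up, the way waitforlisten gates
# RPC calls on /var/tmp/spdk.sock. Helper name is illustrative.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        sleep 0.1
        retries=$((retries - 1))
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# As launched in the log: inside the namespace, core mask 0x2 (core 1),
# all trace groups enabled (-e 0xFFFF), interrupt mode on:
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
#   wait_for_sock /var/tmp/spdk.sock
```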
00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 [2024-11-20 10:09:41.564728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 
10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 [2024-11-20 10:09:41.588920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 malloc0 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.180 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.180 { 00:31:08.180 "params": { 00:31:08.180 "name": "Nvme$subsystem", 00:31:08.180 "trtype": "$TEST_TRANSPORT", 00:31:08.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.181 "adrfam": "ipv4", 00:31:08.181 "trsvcid": "$NVMF_PORT", 00:31:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.181 "hdgst": ${hdgst:-false}, 00:31:08.181 "ddgst": ${ddgst:-false} 00:31:08.181 }, 00:31:08.181 "method": "bdev_nvme_attach_controller" 00:31:08.181 } 00:31:08.181 EOF 00:31:08.181 )") 00:31:08.181 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:08.181 10:09:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:08.181 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:08.181 10:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.181 "params": { 00:31:08.181 "name": "Nvme1", 00:31:08.181 "trtype": "tcp", 00:31:08.181 "traddr": "10.0.0.2", 00:31:08.181 "adrfam": "ipv4", 00:31:08.181 "trsvcid": "4420", 00:31:08.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:08.181 "hdgst": false, 00:31:08.181 "ddgst": false 00:31:08.181 }, 00:31:08.181 "method": "bdev_nvme_attach_controller" 00:31:08.181 }' 00:31:08.181 [2024-11-20 10:09:41.681452] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:31:08.181 [2024-11-20 10:09:41.681492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872061 ] 00:31:08.181 [2024-11-20 10:09:41.756232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.440 [2024-11-20 10:09:41.797078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.698 Running I/O for 10 seconds... 
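The `rpc_cmd` calls traced at zcopy.sh lines 22-30 assemble the target. With SPDK's `scripts/rpc.py` (which `rpc_cmd` wraps), the same sequence reads roughly as follows; the flags are copied from the trace, and the `rpc` echo wrapper is our addition so the sketch can run without a live target:

```shell
# Target bring-up mirrored from target/zcopy.sh@22-30 in the trace above.
# rpc() is an illustrative dry-run wrapper; against a running nvmf_tgt,
# replace its body with:  scripts/rpc.py "$@"
rpc() { echo rpc.py "$@"; }

# TCP transport with zero-copy enabled (--zcopy); other flags as traced.
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem with a serial number and a 10-namespace cap, then its listeners.
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# RAM-backed malloc bdev (32 MiB, 4096-byte blocks) exported as namespace 1.
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The bdevperf initiator then attaches to that listener using the JSON printed just above, read over `/dev/fd/62`.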
00:31:10.570 8535.00 IOPS, 66.68 MiB/s [2024-11-20T09:09:45.089Z] 8582.00 IOPS, 67.05 MiB/s [2024-11-20T09:09:46.467Z] 8589.33 IOPS, 67.10 MiB/s [2024-11-20T09:09:47.404Z] 8606.50 IOPS, 67.24 MiB/s [2024-11-20T09:09:48.341Z] 8619.60 IOPS, 67.34 MiB/s [2024-11-20T09:09:49.277Z] 8627.33 IOPS, 67.40 MiB/s [2024-11-20T09:09:50.213Z] 8637.29 IOPS, 67.48 MiB/s [2024-11-20T09:09:51.150Z] 8635.38 IOPS, 67.46 MiB/s [2024-11-20T09:09:52.090Z] 8639.78 IOPS, 67.50 MiB/s [2024-11-20T09:09:52.350Z] 8642.70 IOPS, 67.52 MiB/s 00:31:18.768 Latency(us) 00:31:18.768 [2024-11-20T09:09:52.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:18.768 Verification LBA range: start 0x0 length 0x1000 00:31:18.768 Nvme1n1 : 10.05 8611.86 67.28 0.00 0.00 14770.37 2715.06 44189.99 00:31:18.768 [2024-11-20T09:09:52.350Z] =================================================================================================================== 00:31:18.768 [2024-11-20T09:09:52.350Z] Total : 8611.86 67.28 0.00 0.00 14770.37 2715.06 44189.99 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2873662 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:18.768 10:09:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:18.768 { 00:31:18.768 "params": { 00:31:18.768 "name": "Nvme$subsystem", 00:31:18.768 "trtype": "$TEST_TRANSPORT", 00:31:18.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.768 "adrfam": "ipv4", 00:31:18.768 "trsvcid": "$NVMF_PORT", 00:31:18.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.768 "hdgst": ${hdgst:-false}, 00:31:18.768 "ddgst": ${ddgst:-false} 00:31:18.768 }, 00:31:18.768 "method": "bdev_nvme_attach_controller" 00:31:18.768 } 00:31:18.768 EOF 00:31:18.768 )") 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:18.768 [2024-11-20 10:09:52.268393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.268429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:18.768 10:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:18.768 "params": { 00:31:18.768 "name": "Nvme1", 00:31:18.768 "trtype": "tcp", 00:31:18.768 "traddr": "10.0.0.2", 00:31:18.768 "adrfam": "ipv4", 00:31:18.768 "trsvcid": "4420", 00:31:18.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:18.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:18.768 "hdgst": false, 00:31:18.768 "ddgst": false 00:31:18.768 }, 00:31:18.768 "method": "bdev_nvme_attach_controller" 00:31:18.768 }' 00:31:18.768 [2024-11-20 10:09:52.280361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.280373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 [2024-11-20 10:09:52.292350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.292359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 [2024-11-20 10:09:52.304194] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:31:18.768 [2024-11-20 10:09:52.304241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873662 ] 00:31:18.768 [2024-11-20 10:09:52.304353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.304363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 [2024-11-20 10:09:52.316351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.316360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 [2024-11-20 10:09:52.328349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.328359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:18.768 [2024-11-20 10:09:52.340352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:18.768 [2024-11-20 10:09:52.340362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.352351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.352360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.364351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.364362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.376353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.376363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:31:19.028 [2024-11-20 10:09:52.377771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.028 [2024-11-20 10:09:52.388354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.388367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.400353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.400364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.412351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.412361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.419726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.028 [2024-11-20 10:09:52.424349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.424361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.436365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.436387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.448364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.448386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.028 [2024-11-20 10:09:52.460354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.028 [2024-11-20 10:09:52.460369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.472355] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.472368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.484354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.484368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.496351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.496361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.508367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.508389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.520357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.520372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.532358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.532373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.544353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.544363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.556353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.556363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.568351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.568362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.580357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.580372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.592357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.592371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.029 [2024-11-20 10:09:52.604403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.029 [2024-11-20 10:09:52.604422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 Running I/O for 5 seconds... 00:31:19.289 [2024-11-20 10:09:52.621254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.621275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.636327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.636346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.650360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.650379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.665509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.665528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.680360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.680388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.692397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.692417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.706035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.706054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.720857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.720877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.735704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.735723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.749906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.749925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.764386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.764405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.778193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.778221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.793249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 
[2024-11-20 10:09:52.793268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.808278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.808298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.821224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.821243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.836354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.836372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.849136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.849154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.289 [2024-11-20 10:09:52.861933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.289 [2024-11-20 10:09:52.861952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.876742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.876759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.888230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.888248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.901962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.901980] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.917003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.917021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.932069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.932087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.946229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.946247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.961052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.961069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.976950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.976967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:52.992408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:52.992426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:53.004962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:53.004979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:53.018092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:53.018110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:19.548 [2024-11-20 10:09:53.032969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:53.032987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:53.047798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.548 [2024-11-20 10:09:53.047815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.548 [2024-11-20 10:09:53.061896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.549 [2024-11-20 10:09:53.061914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.549 [2024-11-20 10:09:53.076686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.549 [2024-11-20 10:09:53.076704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.549 [2024-11-20 10:09:53.090104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.549 [2024-11-20 10:09:53.090122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.549 [2024-11-20 10:09:53.104938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.549 [2024-11-20 10:09:53.104955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.549 [2024-11-20 10:09:53.120679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.549 [2024-11-20 10:09:53.120698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.134262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.134280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.149172] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.149190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.164400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.164418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.178519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.178537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.193253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.193271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.209057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.209075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.220235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.220270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.234118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.234138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.248945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.248963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.264247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.264265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.277986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.278003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.292939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.292958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.307862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.307882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.322399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.322418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.336934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.336951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.351657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.351675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.366415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 [2024-11-20 10:09:53.366432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:19.808 [2024-11-20 10:09:53.381913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:19.808 
[2024-11-20 10:09:53.381932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.396335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.396354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.407982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.408000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.422066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.422084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.437099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.437117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.452574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.452593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.465806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.465824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.480480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.480498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.493419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.493437] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.508768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.508786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.524803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.524821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.540368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.540386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.554054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.554072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.568449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.568477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.582267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.582284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.597138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.597156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.611934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.611953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:20.068 16682.00 IOPS, 130.33 MiB/s [2024-11-20T09:09:53.650Z] [2024-11-20 10:09:53.625707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.625726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.068 [2024-11-20 10:09:53.641033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.068 [2024-11-20 10:09:53.641052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.652219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.652240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.666194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.666218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.681107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.681125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.692587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.692605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.705675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.705693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.716711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.716728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:20.327 [2024-11-20 10:09:53.730105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.730123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.744999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.745022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.760266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.760285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.774036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.774054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.788872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.788889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.804032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.804050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.818908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.327 [2024-11-20 10:09:53.818927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.327 [2024-11-20 10:09:53.833768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.328 [2024-11-20 10:09:53.833785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.328 [2024-11-20 10:09:53.844936] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:20.328 [2024-11-20 10:09:53.844953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two messages above repeat, identical except for timestamps, roughly every 15 ms from 10:09:53.858 through 10:09:54.616 ...]
00:31:21.107 16741.50 IOPS, 130.79 MiB/s [2024-11-20T09:09:54.689Z]
[... the same add_ns failure pair repeats from 10:09:54.632 through 10:09:55.609 ...]
00:31:22.147 16757.33 IOPS, 130.92 MiB/s [2024-11-20T09:09:55.729Z]
[... repetition continues from 10:09:55.623 through 10:09:56.257 ...]
00:31:22.927 [2024-11-20 10:09:56.272247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.272265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:22.927 [2024-11-20 10:09:56.283582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.283600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.297685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.297703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.308970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.308988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.322420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.322439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.337067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.337085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.349278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.349295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.361753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.361770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.372773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.372790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.386272] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.386293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.400836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.400853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.416444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.416472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.429306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.429324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.444765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.444782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.460491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.460509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.927 [2024-11-20 10:09:56.474586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.927 [2024-11-20 10:09:56.474604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.928 [2024-11-20 10:09:56.489186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:22.928 [2024-11-20 10:09:56.489210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:22.928 [2024-11-20 10:09:56.505111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:22.928 [2024-11-20 10:09:56.505129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.519969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.519987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.533682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.533701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.549044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.549061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.564048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.564066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.577542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.577560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.592173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.592191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.605633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.605650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.616731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 
[2024-11-20 10:09:56.616748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 16758.50 IOPS, 130.93 MiB/s [2024-11-20T09:09:56.769Z] [2024-11-20 10:09:56.630571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.630589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.645350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.645368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.660897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.660915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.676787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.676812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.688969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.688987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.701613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.701631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.712649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.712666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.726021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 
[2024-11-20 10:09:56.726038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.740884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.740902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.187 [2024-11-20 10:09:56.756395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.187 [2024-11-20 10:09:56.756414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.770582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.770602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.785813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.785832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.800524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.800543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.811904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.811924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.826105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.826125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.841271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.841289] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.856050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.856069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.869780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.869799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.884899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.884918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.900810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.900828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.916307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.916325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.928027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.928046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.941888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.941911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.956778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.956797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:23.447 [2024-11-20 10:09:56.972765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.972784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.447 [2024-11-20 10:09:56.988452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.447 [2024-11-20 10:09:56.988471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.448 [2024-11-20 10:09:57.000847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.448 [2024-11-20 10:09:57.000865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.448 [2024-11-20 10:09:57.014175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.448 [2024-11-20 10:09:57.014194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.029053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.029072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.043845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.043863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.058160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.058179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.073256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.073276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.088193] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.088218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.102187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.102211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.116461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.116479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.129104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.129122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.143768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.143787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.158360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.158378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.173578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.173596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.188300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.188318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.202345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.202364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.217674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.217697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.232446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.232464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.246147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.246165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.260936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.260954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.707 [2024-11-20 10:09:57.273256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.707 [2024-11-20 10:09:57.273273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.287829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.287847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.301535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.301553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.316302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 
[2024-11-20 10:09:57.316321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.327476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.327493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.342373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.342391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.357515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.357534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.372445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.372463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.385594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.385612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.400717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.400735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.416316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.416336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.429150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.429168] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.441779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.441796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.456563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.456581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.470241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.470259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.485173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.485194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.499918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.499936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.513799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.513817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.528784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.528802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:23.967 [2024-11-20 10:09:57.539887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:23.967 [2024-11-20 10:09:57.539906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:24.227 [2024-11-20 10:09:57.554274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.227 [2024-11-20 10:09:57.554293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.227 [2024-11-20 10:09:57.569089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.227 [2024-11-20 10:09:57.569106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.584284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.584303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.598187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.598211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.613113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.613131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 16753.00 IOPS, 130.88 MiB/s [2024-11-20T09:09:57.810Z] [2024-11-20 10:09:57.627330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.627350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 00:31:24.228 Latency(us) 00:31:24.228 [2024-11-20T09:09:57.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.228 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:24.228 Nvme1n1 : 5.01 16754.73 130.90 0.00 0.00 7632.23 2044.10 12795.12 00:31:24.228 [2024-11-20T09:09:57.810Z] 
=================================================================================================================== 00:31:24.228 [2024-11-20T09:09:57.810Z] Total : 16754.73 130.90 0.00 0.00 7632.23 2044.10 12795.12 00:31:24.228 [2024-11-20 10:09:57.636358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.636375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.648356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.648371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.660369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.660387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.672361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.672377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.684359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.684372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.696354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.696368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.708356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.708370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.720356] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.720369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.732354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.732368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.744350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.744359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.756355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.756366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.768355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.768365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 [2024-11-20 10:09:57.780351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:24.228 [2024-11-20 10:09:57.780360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:24.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2873662) - No such process 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2873662 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.228 10:09:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.228 delay0 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:24.228 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.487 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.487 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.487 10:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:24.487 [2024-11-20 10:09:57.924626] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:32.605 [2024-11-20 10:10:04.720083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91dc80 is same with the state(6) to be set 00:31:32.605 Initializing NVMe 
Controllers 00:31:32.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.605 Initialization complete. Launching workers. 00:31:32.605 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 291, failed: 11448 00:31:32.605 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11657, failed to submit 82 00:31:32.605 success 11541, unsuccessful 116, failed 0 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.605 rmmod nvme_tcp 00:31:32.605 rmmod nvme_fabrics 00:31:32.605 rmmod nvme_keyring 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@517 -- # '[' -n 2871813 ']' 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2871813 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2871813 ']' 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2871813 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2871813 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2871813' 00:31:32.605 killing process with pid 2871813 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2871813 00:31:32.605 10:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2871813 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # 
iptr 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.605 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.606 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.606 10:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.542 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.542 00:31:33.542 real 0m32.545s 00:31:33.542 user 0m41.331s 00:31:33.542 sys 0m12.836s 00:31:33.542 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.542 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.542 ************************************ 00:31:33.543 END TEST nvmf_zcopy 00:31:33.543 ************************************ 00:31:33.543 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:33.543 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:31:33.543 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.543 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.802 ************************************ 00:31:33.802 START TEST nvmf_nmic 00:31:33.802 ************************************ 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:33.802 * Looking for test storage... 00:31:33.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.802 10:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:33.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.802 --rc genhtml_branch_coverage=1 00:31:33.802 --rc 
genhtml_function_coverage=1 00:31:33.802 --rc genhtml_legend=1 00:31:33.802 --rc geninfo_all_blocks=1 00:31:33.802 --rc geninfo_unexecuted_blocks=1 00:31:33.802 00:31:33.802 ' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:33.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.802 --rc genhtml_branch_coverage=1 00:31:33.802 --rc genhtml_function_coverage=1 00:31:33.802 --rc genhtml_legend=1 00:31:33.802 --rc geninfo_all_blocks=1 00:31:33.802 --rc geninfo_unexecuted_blocks=1 00:31:33.802 00:31:33.802 ' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:33.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.802 --rc genhtml_branch_coverage=1 00:31:33.802 --rc genhtml_function_coverage=1 00:31:33.802 --rc genhtml_legend=1 00:31:33.802 --rc geninfo_all_blocks=1 00:31:33.802 --rc geninfo_unexecuted_blocks=1 00:31:33.802 00:31:33.802 ' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:33.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.802 --rc genhtml_branch_coverage=1 00:31:33.802 --rc genhtml_function_coverage=1 00:31:33.802 --rc genhtml_legend=1 00:31:33.802 --rc geninfo_all_blocks=1 00:31:33.802 --rc geninfo_unexecuted_blocks=1 00:31:33.802 00:31:33.802 ' 00:31:33.802 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.803 10:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.803 10:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.803 10:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.803 10:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:40.380 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:40.380 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:40.380 Found net devices under 0000:86:00.0: cvl_0_0 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.380 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:40.381 Found net devices under 0000:86:00.1: cvl_0_1 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.381 10:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.381 10:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:31:40.381 00:31:40.381 --- 10.0.0.2 ping statistics --- 00:31:40.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.381 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:40.381 00:31:40.381 --- 10.0.0.1 ping statistics --- 00:31:40.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.381 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:40.381 10:10:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2879228 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2879228 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2879228 ']' 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.381 10:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.381 [2024-11-20 10:10:13.295513] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:40.381 [2024-11-20 10:10:13.296546] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:31:40.381 [2024-11-20 10:10:13.296589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.381 [2024-11-20 10:10:13.375790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.381 [2024-11-20 10:10:13.417473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.381 [2024-11-20 10:10:13.417512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.381 [2024-11-20 10:10:13.417521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.381 [2024-11-20 10:10:13.417528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.381 [2024-11-20 10:10:13.417534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.381 [2024-11-20 10:10:13.419119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.381 [2024-11-20 10:10:13.419243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.381 [2024-11-20 10:10:13.419294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.381 [2024-11-20 10:10:13.419295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.381 [2024-11-20 10:10:13.486769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.381 [2024-11-20 10:10:13.487258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:40.381 [2024-11-20 10:10:13.487720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:40.381 [2024-11-20 10:10:13.488014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:40.381 [2024-11-20 10:10:13.488070] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.641 [2024-11-20 10:10:14.168071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.641 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.903 Malloc0 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.903 [2024-11-20 
10:10:14.248161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:40.903 test case1: single bdev can't be used in multiple subsystems 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.903 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.904 10:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.904 [2024-11-20 10:10:14.271812] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:40.904 [2024-11-20 10:10:14.271834] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:40.904 [2024-11-20 10:10:14.271845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:40.904 request: 00:31:40.904 { 00:31:40.904 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:40.904 "namespace": { 00:31:40.904 "bdev_name": "Malloc0", 00:31:40.904 "no_auto_visible": false 00:31:40.904 }, 00:31:40.904 "method": "nvmf_subsystem_add_ns", 00:31:40.904 "req_id": 1 00:31:40.904 } 00:31:40.904 Got JSON-RPC error response 00:31:40.904 response: 00:31:40.904 { 00:31:40.904 "code": -32602, 00:31:40.904 "message": "Invalid parameters" 00:31:40.904 } 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:40.904 Adding namespace failed - expected result. 
00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:40.904 test case2: host connect to nvmf target in multiple paths 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:40.904 [2024-11-20 10:10:14.283903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.904 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:41.212 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:41.486 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:41.486 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:41.486 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:41.486 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:41.486 10:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:43.420 10:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:43.420 [global] 00:31:43.420 thread=1 00:31:43.420 invalidate=1 00:31:43.420 rw=write 00:31:43.420 time_based=1 00:31:43.420 runtime=1 00:31:43.420 ioengine=libaio 00:31:43.420 direct=1 00:31:43.420 bs=4096 00:31:43.420 iodepth=1 00:31:43.420 norandommap=0 00:31:43.420 numjobs=1 00:31:43.420 00:31:43.420 verify_dump=1 00:31:43.420 verify_backlog=512 00:31:43.420 verify_state_save=0 00:31:43.420 do_verify=1 00:31:43.420 verify=crc32c-intel 00:31:43.420 [job0] 00:31:43.420 filename=/dev/nvme0n1 00:31:43.420 Could not set queue depth (nvme0n1) 00:31:43.678 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.678 fio-3.35 00:31:43.678 Starting 1 thread 00:31:45.054 00:31:45.054 job0: (groupid=0, jobs=1): err= 0: pid=2879865: Wed Nov 20 
10:10:18 2024 00:31:45.054 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:31:45.054 slat (nsec): min=9234, max=25031, avg=22609.41, stdev=3121.89 00:31:45.054 clat (usec): min=40595, max=41179, avg=40955.02, stdev=115.13 00:31:45.055 lat (usec): min=40604, max=41202, avg=40977.63, stdev=117.26 00:31:45.055 clat percentiles (usec): 00:31:45.055 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:45.055 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:45.055 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:45.055 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:45.055 | 99.99th=[41157] 00:31:45.055 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:31:45.055 slat (usec): min=10, max=28108, avg=67.13, stdev=1241.70 00:31:45.055 clat (usec): min=129, max=255, avg=146.43, stdev=25.43 00:31:45.055 lat (usec): min=141, max=28320, avg=213.55, stdev=1244.84 00:31:45.055 clat percentiles (usec): 00:31:45.055 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:31:45.055 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:31:45.055 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 239], 00:31:45.055 | 99.00th=[ 251], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:31:45.055 | 99.99th=[ 255] 00:31:45.055 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:45.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:45.055 lat (usec) : 250=94.76%, 500=1.12% 00:31:45.055 lat (msec) : 50=4.12% 00:31:45.055 cpu : usr=0.30%, sys=1.09%, ctx=536, majf=0, minf=1 00:31:45.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.055 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.055 00:31:45.055 Run status group 0 (all jobs): 00:31:45.055 READ: bw=86.9KiB/s (89.0kB/s), 86.9KiB/s-86.9KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1013-1013msec 00:31:45.055 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:31:45.055 00:31:45.055 Disk stats (read/write): 00:31:45.055 nvme0n1: ios=45/512, merge=0/0, ticks=1764/71, in_queue=1835, util=98.50% 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:45.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:45.055 10:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.055 rmmod nvme_tcp 00:31:45.055 rmmod nvme_fabrics 00:31:45.055 rmmod nvme_keyring 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2879228 ']' 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2879228 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2879228 ']' 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2879228 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2879228 
00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2879228' 00:31:45.055 killing process with pid 2879228 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2879228 00:31:45.055 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2879228 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.314 10:10:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.314 10:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.219 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.219 00:31:47.220 real 0m13.649s 00:31:47.220 user 0m24.110s 00:31:47.220 sys 0m5.936s 00:31:47.220 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.220 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:47.220 ************************************ 00:31:47.220 END TEST nvmf_nmic 00:31:47.220 ************************************ 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.480 ************************************ 00:31:47.480 START TEST nvmf_fio_target 00:31:47.480 ************************************ 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:47.480 * Looking for test storage... 
00:31:47.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.480 10:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.480 
10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.480 --rc genhtml_branch_coverage=1 00:31:47.480 --rc genhtml_function_coverage=1 00:31:47.480 --rc genhtml_legend=1 00:31:47.480 --rc geninfo_all_blocks=1 00:31:47.480 --rc geninfo_unexecuted_blocks=1 00:31:47.480 00:31:47.480 ' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.480 --rc genhtml_branch_coverage=1 00:31:47.480 --rc genhtml_function_coverage=1 00:31:47.480 --rc genhtml_legend=1 00:31:47.480 --rc geninfo_all_blocks=1 00:31:47.480 --rc geninfo_unexecuted_blocks=1 00:31:47.480 00:31:47.480 ' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.480 --rc genhtml_branch_coverage=1 00:31:47.480 --rc genhtml_function_coverage=1 00:31:47.480 --rc genhtml_legend=1 00:31:47.480 --rc geninfo_all_blocks=1 00:31:47.480 --rc geninfo_unexecuted_blocks=1 00:31:47.480 00:31:47.480 ' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.480 --rc genhtml_branch_coverage=1 00:31:47.480 --rc genhtml_function_coverage=1 00:31:47.480 --rc genhtml_legend=1 00:31:47.480 --rc geninfo_all_blocks=1 
00:31:47.480 --rc geninfo_unexecuted_blocks=1 00:31:47.480 00:31:47.480 ' 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.480 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:47.741 
10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.741 10:10:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.741 
10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.741 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.742 10:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.742 10:10:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.312 10:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.312 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:54.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:54.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.313 
10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:54.313 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:54.313 Found net devices under 0000:86:00.1: cvl_0_1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.313 10:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:31:54.313 00:31:54.313 --- 10.0.0.2 ping statistics --- 00:31:54.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.313 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:31:54.313 00:31:54.313 --- 10.0.0.1 ping statistics --- 00:31:54.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.313 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.313 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.314 10:10:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2883614 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2883614 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2883614 ']' 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.314 10:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.314 [2024-11-20 10:10:26.967468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.314 [2024-11-20 10:10:26.968428] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
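The `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` launch above relies on the namespace plumbing done a few steps earlier by `nvmf_tcp_init`: the target NIC (`cvl_0_0`, 10.0.0.2) is moved into its own namespace while the initiator NIC (`cvl_0_1`, 10.0.0.1) stays in the root namespace, so traffic crosses the physical link even on one host. Condensed as a sketch; interface names, addresses and the iptables rule are taken from the log, but the `run` wrapper is mine and defaults to dry-run since the real commands need root:

```shell
# Condensed replay of the nvmf_tcp_init steps from the log.
# DRY_RUN=1 (the default here) only prints each command.
run() { if [[ ${DRY_RUN:-1} == 1 ]]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
```

The two pings at the end correspond to the successful `ping -c 1 10.0.0.2` / `ip netns exec ... ping -c 1 10.0.0.1` probes in the log, which is what lets `nvmf_tcp_init` return 0.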
00:31:54.314 [2024-11-20 10:10:26.968465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.314 [2024-11-20 10:10:27.049209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.314 [2024-11-20 10:10:27.091475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.314 [2024-11-20 10:10:27.091511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.314 [2024-11-20 10:10:27.091521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.314 [2024-11-20 10:10:27.091528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.314 [2024-11-20 10:10:27.091534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.314 [2024-11-20 10:10:27.093199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.314 [2024-11-20 10:10:27.093227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.314 [2024-11-20 10:10:27.093337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.314 [2024-11-20 10:10:27.093338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.314 [2024-11-20 10:10:27.161266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.314 [2024-11-20 10:10:27.161375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.314 [2024-11-20 10:10:27.162221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
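`waitforlisten 2883614` above blocks until the freshly started `nvmf_tgt` is reachable on `/var/tmp/spdk.sock` (note the `max_retries=100` local in the trace). A rough sketch of that shape; this is my own simplified stand-in, not the autotest helper, which actually probes the RPC socket rather than just checking it exists:

```shell
# Poll until the given pid has its RPC Unix socket available, with a
# retry budget, failing fast if the process dies first.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [[ -S $rpc_addr ]] && return 0           # socket exists: assume listening
        sleep 0.1
    done
    return 1                                     # retry budget exhausted
}
```

Only after this returns 0 does the script print the `timing_exit start_nvmf_tgt` seen below and move on to creating the transport.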
00:31:54.314 [2024-11-20 10:10:27.162491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.314 [2024-11-20 10:10:27.162540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.314 10:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:54.574 [2024-11-20 10:10:28.010131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.574 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:54.833 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:54.833 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:55.093 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:55.093 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.353 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:55.353 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.353 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:55.353 10:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:55.612 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.870 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:55.870 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:56.129 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:56.129 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:56.387 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:56.387 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:56.387 10:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:56.646 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:56.646 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.904 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:56.904 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:56.905 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.163 [2024-11-20 10:10:30.630070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.163 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:57.421 10:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:57.680 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:57.938 10:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:59.839 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:59.840 10:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:59.840 [global] 00:31:59.840 thread=1 00:31:59.840 invalidate=1 00:31:59.840 rw=write 00:31:59.840 time_based=1 00:31:59.840 runtime=1 00:31:59.840 ioengine=libaio 00:31:59.840 direct=1 00:31:59.840 bs=4096 00:31:59.840 iodepth=1 00:31:59.840 norandommap=0 00:31:59.840 numjobs=1 00:31:59.840 00:31:59.840 verify_dump=1 00:31:59.840 verify_backlog=512 00:31:59.840 verify_state_save=0 00:31:59.840 do_verify=1 00:31:59.840 verify=crc32c-intel 00:31:59.840 [job0] 00:31:59.840 filename=/dev/nvme0n1 00:31:59.840 [job1] 00:31:59.840 filename=/dev/nvme0n2 00:31:59.840 [job2] 00:31:59.840 filename=/dev/nvme0n3 00:31:59.840 [job3] 00:31:59.840 filename=/dev/nvme0n4 00:32:00.109 Could not set queue depth (nvme0n1) 00:32:00.109 Could not set queue depth (nvme0n2) 00:32:00.109 Could not set queue depth (nvme0n3) 00:32:00.109 Could not set queue depth (nvme0n4) 00:32:00.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:00.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:00.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:00.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:00.370 fio-3.35 00:32:00.370 Starting 4 threads 00:32:01.758 00:32:01.758 job0: (groupid=0, jobs=1): err= 0: pid=2884951: Wed Nov 20 10:10:35 2024 00:32:01.758 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:32:01.758 slat (nsec): min=9547, max=23531, avg=22106.27, stdev=3045.29 00:32:01.758 clat (usec): min=40872, max=41111, avg=40960.35, stdev=61.20 00:32:01.758 lat (usec): min=40895, 
max=41134, avg=40982.46, stdev=61.86 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:01.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:01.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:01.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:01.758 | 99.99th=[41157] 00:32:01.758 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:32:01.758 slat (usec): min=9, max=17288, avg=44.48, stdev=763.59 00:32:01.758 clat (usec): min=137, max=368, avg=160.34, stdev=18.15 00:32:01.758 lat (usec): min=147, max=17657, avg=204.82, stdev=772.98 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:32:01.758 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:32:01.758 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 186], 00:32:01.758 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 371], 99.95th=[ 371], 00:32:01.758 | 99.99th=[ 371] 00:32:01.758 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:01.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:01.758 lat (usec) : 250=95.69%, 500=0.19% 00:32:01.758 lat (msec) : 50=4.12% 00:32:01.758 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1 00:32:01.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.758 job1: (groupid=0, jobs=1): err= 0: pid=2884959: Wed Nov 20 10:10:35 2024 00:32:01.758 read: IOPS=1483, BW=5933KiB/s (6075kB/s)(6176KiB/1041msec) 
00:32:01.758 slat (nsec): min=7208, max=44380, avg=8295.84, stdev=1867.90 00:32:01.758 clat (usec): min=183, max=41030, avg=402.52, stdev=2733.85 00:32:01.758 lat (usec): min=190, max=41052, avg=410.81, stdev=2734.57 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:32:01.758 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 215], 00:32:01.758 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:32:01.758 | 99.00th=[ 265], 99.50th=[ 392], 99.90th=[41157], 99.95th=[41157], 00:32:01.758 | 99.99th=[41157] 00:32:01.758 write: IOPS=1967, BW=7869KiB/s (8058kB/s)(8192KiB/1041msec); 0 zone resets 00:32:01.758 slat (usec): min=10, max=40621, avg=40.24, stdev=974.78 00:32:01.758 clat (usec): min=121, max=273, avg=152.94, stdev=33.93 00:32:01.758 lat (usec): min=132, max=40843, avg=193.19, stdev=977.72 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 133], 00:32:01.758 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:32:01.758 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 243], 00:32:01.758 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 265], 00:32:01.758 | 99.99th=[ 273] 00:32:01.758 bw ( KiB/s): min= 6000, max=10384, per=59.49%, avg=8192.00, stdev=3099.96, samples=2 00:32:01.758 iops : min= 1500, max= 2596, avg=2048.00, stdev=774.99, samples=2 00:32:01.758 lat (usec) : 250=95.55%, 500=4.26% 00:32:01.758 lat (msec) : 50=0.19% 00:32:01.758 cpu : usr=2.60%, sys=5.77%, ctx=3595, majf=0, minf=1 00:32:01.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 issued rwts: total=1544,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.758 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:01.758 job2: (groupid=0, jobs=1): err= 0: pid=2884960: Wed Nov 20 10:10:35 2024 00:32:01.758 read: IOPS=24, BW=99.8KiB/s (102kB/s)(100KiB/1002msec) 00:32:01.758 slat (nsec): min=10250, max=27852, avg=20617.68, stdev=4517.22 00:32:01.758 clat (usec): min=238, max=41960, avg=36076.37, stdev=13481.23 00:32:01.758 lat (usec): min=261, max=41988, avg=36096.99, stdev=13480.62 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 239], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[40633], 00:32:01.758 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:01.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:01.758 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:01.758 | 99.99th=[42206] 00:32:01.758 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:32:01.758 slat (nsec): min=10622, max=40008, avg=12031.35, stdev=1956.62 00:32:01.758 clat (usec): min=149, max=310, avg=178.85, stdev=22.48 00:32:01.758 lat (usec): min=161, max=350, avg=190.89, stdev=22.84 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:32:01.758 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:32:01.758 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 241], 00:32:01.758 | 99.00th=[ 253], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 310], 00:32:01.758 | 99.99th=[ 310] 00:32:01.758 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:01.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:01.758 lat (usec) : 250=94.04%, 500=1.86% 00:32:01.758 lat (msec) : 50=4.10% 00:32:01.758 cpu : usr=0.60%, sys=0.80%, ctx=538, majf=0, minf=1 00:32:01.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.758 job3: (groupid=0, jobs=1): err= 0: pid=2884961: Wed Nov 20 10:10:35 2024 00:32:01.758 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:32:01.758 slat (nsec): min=10510, max=22681, avg=21223.39, stdev=2814.52 00:32:01.758 clat (usec): min=7008, max=41303, avg=39505.28, stdev=7084.49 00:32:01.758 lat (usec): min=7030, max=41314, avg=39526.50, stdev=7084.32 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 6980], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:01.758 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:01.758 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:01.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:01.758 | 99.99th=[41157] 00:32:01.758 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:32:01.758 slat (nsec): min=10479, max=35976, avg=11874.83, stdev=1847.42 00:32:01.758 clat (usec): min=153, max=312, avg=177.69, stdev=16.74 00:32:01.758 lat (usec): min=164, max=348, avg=189.57, stdev=17.35 00:32:01.758 clat percentiles (usec): 00:32:01.758 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:32:01.758 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:32:01.758 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:32:01.758 | 99.00th=[ 231], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 314], 00:32:01.758 | 99.99th=[ 314] 00:32:01.758 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:32:01.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:01.758 lat (usec) : 250=95.14%, 500=0.56% 00:32:01.758 lat (msec) : 10=0.19%, 50=4.11% 00:32:01.758 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, 
minf=2 00:32:01.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:01.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:01.758 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:01.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:01.758 00:32:01.758 Run status group 0 (all jobs): 00:32:01.758 READ: bw=6202KiB/s (6351kB/s), 87.3KiB/s-5933KiB/s (89.4kB/s-6075kB/s), io=6456KiB (6611kB), run=1002-1041msec 00:32:01.758 WRITE: bw=13.4MiB/s (14.1MB/s), 2032KiB/s-7869KiB/s (2081kB/s-8058kB/s), io=14.0MiB (14.7MB), run=1002-1041msec 00:32:01.758 00:32:01.758 Disk stats (read/write): 00:32:01.758 nvme0n1: ios=68/512, merge=0/0, ticks=1446/78, in_queue=1524, util=87.37% 00:32:01.758 nvme0n2: ios=1561/2048, merge=0/0, ticks=1265/281, in_queue=1546, util=91.35% 00:32:01.758 nvme0n3: ios=77/512, merge=0/0, ticks=766/87, in_queue=853, util=89.69% 00:32:01.758 nvme0n4: ios=75/512, merge=0/0, ticks=773/88, in_queue=861, util=94.03% 00:32:01.759 10:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:01.759 [global] 00:32:01.759 thread=1 00:32:01.759 invalidate=1 00:32:01.759 rw=randwrite 00:32:01.759 time_based=1 00:32:01.759 runtime=1 00:32:01.759 ioengine=libaio 00:32:01.759 direct=1 00:32:01.759 bs=4096 00:32:01.759 iodepth=1 00:32:01.759 norandommap=0 00:32:01.759 numjobs=1 00:32:01.759 00:32:01.759 verify_dump=1 00:32:01.759 verify_backlog=512 00:32:01.759 verify_state_save=0 00:32:01.759 do_verify=1 00:32:01.759 verify=crc32c-intel 00:32:01.759 [job0] 00:32:01.759 filename=/dev/nvme0n1 00:32:01.759 [job1] 00:32:01.759 filename=/dev/nvme0n2 00:32:01.759 [job2] 00:32:01.759 filename=/dev/nvme0n3 00:32:01.759 [job3] 00:32:01.759 
filename=/dev/nvme0n4 00:32:01.759 Could not set queue depth (nvme0n1) 00:32:01.759 Could not set queue depth (nvme0n2) 00:32:01.759 Could not set queue depth (nvme0n3) 00:32:01.759 Could not set queue depth (nvme0n4) 00:32:02.018 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.018 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.018 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.018 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:02.018 fio-3.35 00:32:02.018 Starting 4 threads 00:32:03.387 00:32:03.387 job0: (groupid=0, jobs=1): err= 0: pid=2885331: Wed Nov 20 10:10:36 2024 00:32:03.387 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:32:03.387 slat (nsec): min=6598, max=27611, avg=7544.06, stdev=1024.74 00:32:03.387 clat (usec): min=170, max=545, avg=219.75, stdev=33.32 00:32:03.387 lat (usec): min=180, max=552, avg=227.29, stdev=33.33 00:32:03.387 clat percentiles (usec): 00:32:03.387 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 188], 00:32:03.387 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 239], 60.00th=[ 245], 00:32:03.387 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:32:03.387 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 400], 99.95th=[ 506], 00:32:03.387 | 99.99th=[ 545] 00:32:03.387 write: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:32:03.387 slat (nsec): min=8903, max=33893, avg=10480.76, stdev=1229.29 00:32:03.387 clat (usec): min=116, max=363, avg=147.27, stdev=24.08 00:32:03.387 lat (usec): min=130, max=397, avg=157.75, stdev=24.16 00:32:03.387 clat percentiles (usec): 00:32:03.387 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 130], 00:32:03.387 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 
60.00th=[ 141], 00:32:03.387 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 186], 00:32:03.387 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 277], 00:32:03.387 | 99.99th=[ 363] 00:32:03.387 bw ( KiB/s): min=12288, max=12288, per=75.51%, avg=12288.00, stdev= 0.00, samples=1 00:32:03.387 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:03.387 lat (usec) : 250=91.21%, 500=8.75%, 750=0.04% 00:32:03.387 cpu : usr=2.40%, sys=4.80%, ctx=5142, majf=0, minf=1 00:32:03.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.387 issued rwts: total=2560,2581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.387 job1: (groupid=0, jobs=1): err= 0: pid=2885332: Wed Nov 20 10:10:36 2024 00:32:03.387 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:32:03.387 slat (nsec): min=12611, max=26693, avg=22658.50, stdev=2549.82 00:32:03.387 clat (usec): min=40884, max=41013, avg=40955.78, stdev=36.59 00:32:03.387 lat (usec): min=40911, max=41037, avg=40978.44, stdev=37.04 00:32:03.387 clat percentiles (usec): 00:32:03.387 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:03.387 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:03.387 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:03.387 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:03.387 | 99.99th=[41157] 00:32:03.387 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:03.387 slat (nsec): min=9369, max=39480, avg=11726.23, stdev=3057.51 00:32:03.387 clat (usec): min=146, max=301, avg=179.66, stdev=21.25 00:32:03.387 lat (usec): min=158, max=332, avg=191.39, stdev=22.26 00:32:03.387 
clat percentiles (usec): 00:32:03.387 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:32:03.387 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:32:03.387 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 215], 00:32:03.387 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 302], 99.95th=[ 302], 00:32:03.387 | 99.99th=[ 302] 00:32:03.387 bw ( KiB/s): min= 4096, max= 4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:32:03.387 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:03.387 lat (usec) : 250=94.01%, 500=1.87% 00:32:03.387 lat (msec) : 50=4.12% 00:32:03.387 cpu : usr=0.40%, sys=0.60%, ctx=535, majf=0, minf=1 00:32:03.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.388 job2: (groupid=0, jobs=1): err= 0: pid=2885338: Wed Nov 20 10:10:36 2024 00:32:03.388 read: IOPS=36, BW=146KiB/s (150kB/s)(148KiB/1012msec) 00:32:03.388 slat (nsec): min=7384, max=19051, avg=9933.32, stdev=3378.80 00:32:03.388 clat (usec): min=250, max=41368, avg=24487.21, stdev=20278.18 00:32:03.388 lat (usec): min=258, max=41376, avg=24497.15, stdev=20276.86 00:32:03.388 clat percentiles (usec): 00:32:03.388 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:32:03.388 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[40633], 60.00th=[41157], 00:32:03.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:03.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:03.388 | 99.99th=[41157] 00:32:03.388 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:32:03.388 slat (nsec): min=9088, max=43847, 
avg=10408.38, stdev=2060.35 00:32:03.388 clat (usec): min=155, max=320, avg=192.27, stdev=18.85 00:32:03.388 lat (usec): min=165, max=346, avg=202.68, stdev=19.28 00:32:03.388 clat percentiles (usec): 00:32:03.388 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:32:03.388 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:32:03.388 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 223], 00:32:03.388 | 99.00th=[ 245], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 322], 00:32:03.388 | 99.99th=[ 322] 00:32:03.388 bw ( KiB/s): min= 4096, max= 4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:32:03.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:03.388 lat (usec) : 250=92.35%, 500=3.64% 00:32:03.388 lat (msec) : 50=4.01% 00:32:03.388 cpu : usr=0.10%, sys=0.59%, ctx=549, majf=0, minf=2 00:32:03.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.388 job3: (groupid=0, jobs=1): err= 0: pid=2885340: Wed Nov 20 10:10:36 2024 00:32:03.388 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:32:03.388 slat (nsec): min=9712, max=29865, avg=22272.86, stdev=3293.77 00:32:03.388 clat (usec): min=40697, max=41054, avg=40952.50, stdev=89.78 00:32:03.388 lat (usec): min=40706, max=41076, avg=40974.78, stdev=91.95 00:32:03.388 clat percentiles (usec): 00:32:03.388 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:03.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:03.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:03.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:32:03.388 | 99.99th=[41157] 00:32:03.388 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:32:03.388 slat (nsec): min=10209, max=41471, avg=11400.70, stdev=2269.77 00:32:03.388 clat (usec): min=149, max=367, avg=184.25, stdev=19.48 00:32:03.388 lat (usec): min=159, max=378, avg=195.65, stdev=20.03 00:32:03.388 clat percentiles (usec): 00:32:03.388 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:32:03.388 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:32:03.388 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:32:03.388 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 367], 99.95th=[ 367], 00:32:03.388 | 99.99th=[ 367] 00:32:03.388 bw ( KiB/s): min= 4096, max= 4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:32:03.388 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:03.388 lat (usec) : 250=95.51%, 500=0.37% 00:32:03.388 lat (msec) : 50=4.12% 00:32:03.388 cpu : usr=0.80%, sys=0.60%, ctx=534, majf=0, minf=2 00:32:03.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.388 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:03.388 00:32:03.388 Run status group 0 (all jobs): 00:32:03.388 READ: bw=10.2MiB/s (10.7MB/s), 87.7KiB/s-9.99MiB/s (89.8kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1012msec 00:32:03.388 WRITE: bw=15.9MiB/s (16.7MB/s), 2024KiB/s-10.1MiB/s (2072kB/s-10.6MB/s), io=16.1MiB (16.9MB), run=1001-1012msec 00:32:03.388 00:32:03.388 Disk stats (read/write): 00:32:03.388 nvme0n1: ios=2088/2488, merge=0/0, ticks=1595/352, in_queue=1947, util=97.09% 00:32:03.388 nvme0n2: ios=41/512, merge=0/0, ticks=1640/88, in_queue=1728, util=91.27% 
00:32:03.388 nvme0n3: ios=83/512, merge=0/0, ticks=814/94, in_queue=908, util=90.64% 00:32:03.388 nvme0n4: ios=75/512, merge=0/0, ticks=808/88, in_queue=896, util=95.18% 00:32:03.388 10:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:03.388 [global] 00:32:03.388 thread=1 00:32:03.388 invalidate=1 00:32:03.388 rw=write 00:32:03.388 time_based=1 00:32:03.388 runtime=1 00:32:03.388 ioengine=libaio 00:32:03.388 direct=1 00:32:03.388 bs=4096 00:32:03.388 iodepth=128 00:32:03.388 norandommap=0 00:32:03.388 numjobs=1 00:32:03.388 00:32:03.388 verify_dump=1 00:32:03.388 verify_backlog=512 00:32:03.388 verify_state_save=0 00:32:03.388 do_verify=1 00:32:03.388 verify=crc32c-intel 00:32:03.388 [job0] 00:32:03.388 filename=/dev/nvme0n1 00:32:03.388 [job1] 00:32:03.388 filename=/dev/nvme0n2 00:32:03.388 [job2] 00:32:03.388 filename=/dev/nvme0n3 00:32:03.388 [job3] 00:32:03.388 filename=/dev/nvme0n4 00:32:03.388 Could not set queue depth (nvme0n1) 00:32:03.388 Could not set queue depth (nvme0n2) 00:32:03.388 Could not set queue depth (nvme0n3) 00:32:03.388 Could not set queue depth (nvme0n4) 00:32:03.388 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:03.388 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:03.388 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:03.388 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:03.388 fio-3.35 00:32:03.388 Starting 4 threads 00:32:04.759 00:32:04.759 job0: (groupid=0, jobs=1): err= 0: pid=2885706: Wed Nov 20 10:10:38 2024 00:32:04.759 read: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1010msec) 00:32:04.759 slat (nsec): min=1076, max=23435k, 
avg=135721.98, stdev=1099795.24 00:32:04.759 clat (usec): min=831, max=54565, avg=16979.26, stdev=8738.24 00:32:04.759 lat (usec): min=4182, max=54575, avg=17114.98, stdev=8814.76 00:32:04.759 clat percentiles (usec): 00:32:04.759 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10814], 00:32:04.759 | 30.00th=[11469], 40.00th=[12649], 50.00th=[15008], 60.00th=[16909], 00:32:04.759 | 70.00th=[18744], 80.00th=[21627], 90.00th=[27657], 95.00th=[36439], 00:32:04.759 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54264], 99.95th=[54789], 00:32:04.759 | 99.99th=[54789] 00:32:04.759 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:32:04.759 slat (nsec): min=1871, max=14711k, avg=99898.29, stdev=713321.70 00:32:04.759 clat (usec): min=1448, max=54420, avg=15005.38, stdev=6222.84 00:32:04.759 lat (usec): min=1460, max=54424, avg=15105.28, stdev=6273.10 00:32:04.759 clat percentiles (usec): 00:32:04.759 | 1.00th=[ 4047], 5.00th=[ 6063], 10.00th=[ 7570], 20.00th=[10028], 00:32:04.759 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13566], 60.00th=[16581], 00:32:04.759 | 70.00th=[18482], 80.00th=[20055], 90.00th=[21890], 95.00th=[25560], 00:32:04.759 | 99.00th=[30540], 99.50th=[35914], 99.90th=[37487], 99.95th=[52167], 00:32:04.759 | 99.99th=[54264] 00:32:04.759 bw ( KiB/s): min=12304, max=20464, per=22.30%, avg=16384.00, stdev=5769.99, samples=2 00:32:04.759 iops : min= 3076, max= 5116, avg=4096.00, stdev=1442.50, samples=2 00:32:04.759 lat (usec) : 1000=0.01% 00:32:04.759 lat (msec) : 2=0.10%, 4=0.33%, 10=16.98%, 20=60.42%, 50=21.51% 00:32:04.759 lat (msec) : 100=0.66% 00:32:04.759 cpu : usr=2.28%, sys=4.26%, ctx=323, majf=0, minf=1 00:32:04.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:04.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.759 issued rwts: total=3902,4096,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:04.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.759 job1: (groupid=0, jobs=1): err= 0: pid=2885707: Wed Nov 20 10:10:38 2024 00:32:04.759 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:32:04.759 slat (nsec): min=1361, max=20518k, avg=141621.02, stdev=1177512.62 00:32:04.759 clat (usec): min=4208, max=41411, avg=18024.80, stdev=6075.23 00:32:04.759 lat (usec): min=4213, max=41438, avg=18166.42, stdev=6175.92 00:32:04.759 clat percentiles (usec): 00:32:04.759 | 1.00th=[ 6128], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:32:04.759 | 30.00th=[14091], 40.00th=[16188], 50.00th=[18220], 60.00th=[19006], 00:32:04.759 | 70.00th=[20579], 80.00th=[22938], 90.00th=[25822], 95.00th=[30540], 00:32:04.759 | 99.00th=[31851], 99.50th=[32637], 99.90th=[35914], 99.95th=[38536], 00:32:04.759 | 99.99th=[41157] 00:32:04.759 write: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1009msec); 0 zone resets 00:32:04.759 slat (usec): min=2, max=17200, avg=128.05, stdev=904.06 00:32:04.759 clat (usec): min=2974, max=36760, avg=17066.09, stdev=5869.77 00:32:04.759 lat (usec): min=3893, max=37723, avg=17194.14, stdev=5928.49 00:32:04.759 clat percentiles (usec): 00:32:04.760 | 1.00th=[ 5473], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11994], 00:32:04.760 | 30.00th=[12387], 40.00th=[15139], 50.00th=[17171], 60.00th=[18744], 00:32:04.760 | 70.00th=[20055], 80.00th=[20841], 90.00th=[23200], 95.00th=[29492], 00:32:04.760 | 99.00th=[33424], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:32:04.760 | 99.99th=[36963] 00:32:04.760 bw ( KiB/s): min=12336, max=16384, per=19.55%, avg=14360.00, stdev=2862.37, samples=2 00:32:04.760 iops : min= 3084, max= 4096, avg=3590.00, stdev=715.59, samples=2 00:32:04.760 lat (msec) : 4=0.11%, 10=5.09%, 20=62.78%, 50=32.03% 00:32:04.760 cpu : usr=3.17%, sys=4.17%, ctx=299, majf=0, minf=1 00:32:04.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 
00:32:04.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.760 issued rwts: total=3584,3691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.760 job2: (groupid=0, jobs=1): err= 0: pid=2885708: Wed Nov 20 10:10:38 2024 00:32:04.760 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:32:04.760 slat (nsec): min=1223, max=16247k, avg=89861.28, stdev=615133.48 00:32:04.760 clat (usec): min=5273, max=27819, avg=12086.21, stdev=3086.79 00:32:04.760 lat (usec): min=5278, max=27826, avg=12176.07, stdev=3121.24 00:32:04.760 clat percentiles (usec): 00:32:04.760 | 1.00th=[ 5932], 5.00th=[ 8029], 10.00th=[10028], 20.00th=[10552], 00:32:04.760 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:32:04.760 | 70.00th=[11994], 80.00th=[12518], 90.00th=[15270], 95.00th=[17695], 00:32:04.760 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[27919], 00:32:04.760 | 99.99th=[27919] 00:32:04.760 write: IOPS=5189, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec); 0 zone resets 00:32:04.760 slat (nsec): min=1935, max=18312k, avg=94357.39, stdev=677699.19 00:32:04.760 clat (usec): min=1310, max=38229, avg=12508.16, stdev=3268.26 00:32:04.760 lat (usec): min=1322, max=38250, avg=12602.52, stdev=3333.89 00:32:04.760 clat percentiles (usec): 00:32:04.760 | 1.00th=[ 4359], 5.00th=[ 8356], 10.00th=[10159], 20.00th=[10814], 00:32:04.760 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:32:04.760 | 70.00th=[12780], 80.00th=[13829], 90.00th=[17171], 95.00th=[20055], 00:32:04.760 | 99.00th=[21890], 99.50th=[21890], 99.90th=[32113], 99.95th=[34341], 00:32:04.760 | 99.99th=[38011] 00:32:04.760 bw ( KiB/s): min=20480, max=20480, per=27.88%, avg=20480.00, stdev= 0.00, samples=2 00:32:04.760 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 
00:32:04.760 lat (msec) : 2=0.03%, 4=0.18%, 10=9.25%, 20=86.30%, 50=4.23% 00:32:04.760 cpu : usr=3.69%, sys=5.98%, ctx=413, majf=0, minf=2 00:32:04.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:04.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.760 issued rwts: total=5120,5210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.760 job3: (groupid=0, jobs=1): err= 0: pid=2885709: Wed Nov 20 10:10:38 2024 00:32:04.760 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:32:04.760 slat (nsec): min=1746, max=6614.5k, avg=90173.26, stdev=561249.32 00:32:04.760 clat (usec): min=6778, max=20885, avg=11898.68, stdev=1956.73 00:32:04.760 lat (usec): min=7079, max=20893, avg=11988.86, stdev=1974.72 00:32:04.760 clat percentiles (usec): 00:32:04.760 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10421], 00:32:04.760 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[12256], 00:32:04.760 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14615], 95.00th=[15401], 00:32:04.760 | 99.00th=[16712], 99.50th=[17433], 99.90th=[20317], 99.95th=[20317], 00:32:04.760 | 99.99th=[20841] 00:32:04.760 write: IOPS=5529, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1004msec); 0 zone resets 00:32:04.760 slat (usec): min=2, max=6656, avg=90.72, stdev=556.61 00:32:04.760 clat (usec): min=641, max=19609, avg=11845.28, stdev=1485.59 00:32:04.760 lat (usec): min=5353, max=20080, avg=11936.00, stdev=1557.67 00:32:04.760 clat percentiles (usec): 00:32:04.760 | 1.00th=[ 6128], 5.00th=[10290], 10.00th=[10814], 20.00th=[11076], 00:32:04.760 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:32:04.760 | 70.00th=[12125], 80.00th=[13042], 90.00th=[13435], 95.00th=[14091], 00:32:04.760 | 99.00th=[16188], 99.50th=[17695], 99.90th=[18744], 
99.95th=[19268], 00:32:04.760 | 99.99th=[19530] 00:32:04.760 bw ( KiB/s): min=20480, max=22912, per=29.53%, avg=21696.00, stdev=1719.68, samples=2 00:32:04.760 iops : min= 5120, max= 5728, avg=5424.00, stdev=429.92, samples=2 00:32:04.760 lat (usec) : 750=0.01% 00:32:04.760 lat (msec) : 10=8.36%, 20=91.50%, 50=0.13% 00:32:04.760 cpu : usr=5.08%, sys=6.78%, ctx=420, majf=0, minf=1 00:32:04.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:04.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.760 issued rwts: total=5120,5552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.760 00:32:04.760 Run status group 0 (all jobs): 00:32:04.760 READ: bw=68.6MiB/s (71.9MB/s), 13.9MiB/s-19.9MiB/s (14.5MB/s-20.9MB/s), io=69.2MiB (72.6MB), run=1004-1010msec 00:32:04.760 WRITE: bw=71.7MiB/s (75.2MB/s), 14.3MiB/s-21.6MiB/s (15.0MB/s-22.7MB/s), io=72.5MiB (76.0MB), run=1004-1010msec 00:32:04.760 00:32:04.760 Disk stats (read/write): 00:32:04.760 nvme0n1: ios=3283/3584, merge=0/0, ticks=49418/46534, in_queue=95952, util=91.78% 00:32:04.760 nvme0n2: ios=2925/3072, merge=0/0, ticks=49946/52271, in_queue=102217, util=96.02% 00:32:04.760 nvme0n3: ios=4119/4403, merge=0/0, ticks=28049/32536, in_queue=60585, util=99.47% 00:32:04.760 nvme0n4: ios=4182/4608, merge=0/0, ticks=25174/25422, in_queue=50596, util=99.14% 00:32:04.760 10:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:04.760 [global] 00:32:04.760 thread=1 00:32:04.760 invalidate=1 00:32:04.760 rw=randwrite 00:32:04.760 time_based=1 00:32:04.760 runtime=1 00:32:04.760 ioengine=libaio 00:32:04.760 direct=1 00:32:04.760 bs=4096 00:32:04.760 iodepth=128 
00:32:04.760 norandommap=0 00:32:04.760 numjobs=1 00:32:04.760 00:32:04.760 verify_dump=1 00:32:04.760 verify_backlog=512 00:32:04.760 verify_state_save=0 00:32:04.760 do_verify=1 00:32:04.760 verify=crc32c-intel 00:32:04.760 [job0] 00:32:04.760 filename=/dev/nvme0n1 00:32:04.760 [job1] 00:32:04.760 filename=/dev/nvme0n2 00:32:04.760 [job2] 00:32:04.760 filename=/dev/nvme0n3 00:32:04.760 [job3] 00:32:04.760 filename=/dev/nvme0n4 00:32:04.760 Could not set queue depth (nvme0n1) 00:32:04.760 Could not set queue depth (nvme0n2) 00:32:04.760 Could not set queue depth (nvme0n3) 00:32:04.760 Could not set queue depth (nvme0n4) 00:32:05.017 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:05.017 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:05.017 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:05.017 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:05.017 fio-3.35 00:32:05.017 Starting 4 threads 00:32:06.387 00:32:06.387 job0: (groupid=0, jobs=1): err= 0: pid=2886082: Wed Nov 20 10:10:39 2024 00:32:06.387 read: IOPS=4502, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1003msec) 00:32:06.387 slat (nsec): min=1109, max=19095k, avg=108707.11, stdev=753784.24 00:32:06.387 clat (usec): min=703, max=35599, avg=13877.19, stdev=4558.28 00:32:06.387 lat (usec): min=3234, max=35623, avg=13985.89, stdev=4582.08 00:32:06.387 clat percentiles (usec): 00:32:06.387 | 1.00th=[ 5669], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10421], 00:32:06.387 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12780], 60.00th=[13698], 00:32:06.387 | 70.00th=[15008], 80.00th=[17695], 90.00th=[20841], 95.00th=[22676], 00:32:06.387 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27395], 99.95th=[28443], 00:32:06.387 | 99.99th=[35390] 00:32:06.387 write: 
IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:32:06.387 slat (nsec): min=1943, max=25208k, avg=103832.66, stdev=825722.10 00:32:06.387 clat (usec): min=2582, max=51420, avg=14017.56, stdev=6717.68 00:32:06.387 lat (usec): min=2590, max=51431, avg=14121.40, stdev=6777.03 00:32:06.387 clat percentiles (usec): 00:32:06.387 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 8225], 20.00th=[10028], 00:32:06.387 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11863], 60.00th=[13435], 00:32:06.387 | 70.00th=[15008], 80.00th=[18220], 90.00th=[20841], 95.00th=[26346], 00:32:06.387 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:32:06.387 | 99.99th=[51643] 00:32:06.387 bw ( KiB/s): min=18352, max=18512, per=25.43%, avg=18432.00, stdev=113.14, samples=2 00:32:06.387 iops : min= 4588, max= 4628, avg=4608.00, stdev=28.28, samples=2 00:32:06.387 lat (usec) : 750=0.01% 00:32:06.387 lat (msec) : 4=0.33%, 10=16.59%, 20=68.72%, 50=14.32%, 100=0.02% 00:32:06.387 cpu : usr=2.79%, sys=4.59%, ctx=374, majf=0, minf=1 00:32:06.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:06.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.387 issued rwts: total=4516,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.387 job1: (groupid=0, jobs=1): err= 0: pid=2886083: Wed Nov 20 10:10:39 2024 00:32:06.387 read: IOPS=5577, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1002msec) 00:32:06.387 slat (nsec): min=1065, max=14254k, avg=94336.68, stdev=639682.92 00:32:06.387 clat (usec): min=625, max=40213, avg=11943.89, stdev=5602.20 00:32:06.387 lat (usec): min=1124, max=40219, avg=12038.23, stdev=5625.58 00:32:06.387 clat percentiles (usec): 00:32:06.387 | 1.00th=[ 2311], 5.00th=[ 6259], 10.00th=[ 7963], 20.00th=[ 8979], 00:32:06.387 | 30.00th=[ 9503], 
40.00th=[10159], 50.00th=[10421], 60.00th=[10945], 00:32:06.387 | 70.00th=[11994], 80.00th=[13698], 90.00th=[17957], 95.00th=[23725], 00:32:06.388 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38536], 99.95th=[40109], 00:32:06.388 | 99.99th=[40109] 00:32:06.388 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:32:06.388 slat (nsec): min=1741, max=16323k, avg=77626.07, stdev=532515.47 00:32:06.388 clat (usec): min=729, max=40095, avg=10599.02, stdev=4178.59 00:32:06.388 lat (usec): min=736, max=40101, avg=10676.64, stdev=4211.65 00:32:06.388 clat percentiles (usec): 00:32:06.388 | 1.00th=[ 2769], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 8455], 00:32:06.388 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:32:06.388 | 70.00th=[11076], 80.00th=[11994], 90.00th=[13173], 95.00th=[17433], 00:32:06.388 | 99.00th=[28181], 99.50th=[30016], 99.90th=[33817], 99.95th=[33817], 00:32:06.388 | 99.99th=[40109] 00:32:06.388 bw ( KiB/s): min=20480, max=24576, per=31.08%, avg=22528.00, stdev=2896.31, samples=2 00:32:06.388 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:32:06.388 lat (usec) : 750=0.03%, 1000=0.08% 00:32:06.388 lat (msec) : 2=0.63%, 4=1.30%, 10=36.95%, 20=54.88%, 50=6.13% 00:32:06.388 cpu : usr=3.30%, sys=4.80%, ctx=465, majf=0, minf=1 00:32:06.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:06.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.388 issued rwts: total=5589,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.388 job2: (groupid=0, jobs=1): err= 0: pid=2886084: Wed Nov 20 10:10:39 2024 00:32:06.388 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:32:06.388 slat (nsec): min=1068, max=20741k, avg=118034.11, stdev=837248.31 00:32:06.388 clat (usec): 
min=6605, max=52683, avg=15119.68, stdev=7386.40 00:32:06.388 lat (usec): min=6611, max=52686, avg=15237.71, stdev=7432.06 00:32:06.388 clat percentiles (usec): 00:32:06.388 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10945], 00:32:06.388 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12649], 60.00th=[13829], 00:32:06.388 | 70.00th=[15401], 80.00th=[17695], 90.00th=[24249], 95.00th=[33817], 00:32:06.388 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:32:06.388 | 99.99th=[52691] 00:32:06.388 write: IOPS=4245, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1003msec); 0 zone resets 00:32:06.388 slat (nsec): min=1857, max=21480k, avg=116786.43, stdev=795589.44 00:32:06.388 clat (usec): min=288, max=38419, avg=15305.04, stdev=5700.82 00:32:06.388 lat (usec): min=649, max=38434, avg=15421.82, stdev=5755.83 00:32:06.388 clat percentiles (usec): 00:32:06.388 | 1.00th=[ 4883], 5.00th=[ 7832], 10.00th=[10290], 20.00th=[11207], 00:32:06.388 | 30.00th=[11469], 40.00th=[11731], 50.00th=[13566], 60.00th=[15401], 00:32:06.388 | 70.00th=[17957], 80.00th=[20841], 90.00th=[22938], 95.00th=[25560], 00:32:06.388 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:32:06.388 | 99.99th=[38536] 00:32:06.388 bw ( KiB/s): min=14584, max=18456, per=22.79%, avg=16520.00, stdev=2737.92, samples=2 00:32:06.388 iops : min= 3646, max= 4614, avg=4130.00, stdev=684.48, samples=2 00:32:06.388 lat (usec) : 500=0.01%, 750=0.04% 00:32:06.388 lat (msec) : 10=10.51%, 20=70.35%, 50=19.08%, 100=0.01% 00:32:06.388 cpu : usr=1.50%, sys=5.09%, ctx=387, majf=0, minf=1 00:32:06.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:06.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.388 issued rwts: total=4096,4258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.388 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:32:06.388 job3: (groupid=0, jobs=1): err= 0: pid=2886085: Wed Nov 20 10:10:39 2024 00:32:06.388 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:32:06.388 slat (nsec): min=1103, max=14677k, avg=130501.13, stdev=865341.58 00:32:06.388 clat (usec): min=1663, max=52998, avg=17111.26, stdev=6767.03 00:32:06.388 lat (usec): min=1668, max=53000, avg=17241.76, stdev=6809.67 00:32:06.388 clat percentiles (usec): 00:32:06.388 | 1.00th=[ 4047], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[12649], 00:32:06.388 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14877], 60.00th=[16581], 00:32:06.388 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26084], 95.00th=[30802], 00:32:06.388 | 99.00th=[40109], 99.50th=[41157], 99.90th=[51643], 99.95th=[51643], 00:32:06.388 | 99.99th=[53216] 00:32:06.388 write: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1002msec); 0 zone resets 00:32:06.388 slat (nsec): min=1967, max=21881k, avg=137507.85, stdev=912289.07 00:32:06.388 clat (usec): min=683, max=60248, avg=17743.29, stdev=8669.84 00:32:06.388 lat (usec): min=4677, max=60278, avg=17880.80, stdev=8741.79 00:32:06.388 clat percentiles (usec): 00:32:06.388 | 1.00th=[ 5604], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11338], 00:32:06.388 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13829], 60.00th=[16319], 00:32:06.388 | 70.00th=[20317], 80.00th=[22676], 90.00th=[32375], 95.00th=[38011], 00:32:06.388 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[56886], 00:32:06.388 | 99.99th=[60031] 00:32:06.388 bw ( KiB/s): min=13488, max=15216, per=19.80%, avg=14352.00, stdev=1221.88, samples=2 00:32:06.388 iops : min= 3372, max= 3804, avg=3588.00, stdev=305.47, samples=2 00:32:06.388 lat (usec) : 750=0.01% 00:32:06.388 lat (msec) : 2=0.36%, 10=6.25%, 20=64.45%, 50=28.77%, 100=0.15% 00:32:06.388 cpu : usr=2.40%, sys=3.50%, ctx=361, majf=0, minf=1 00:32:06.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:06.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.388 issued rwts: total=3584,3680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.388 00:32:06.388 Run status group 0 (all jobs): 00:32:06.388 READ: bw=69.3MiB/s (72.6MB/s), 14.0MiB/s-21.8MiB/s (14.7MB/s-22.8MB/s), io=69.5MiB (72.8MB), run=1002-1003msec 00:32:06.388 WRITE: bw=70.8MiB/s (74.2MB/s), 14.3MiB/s-22.0MiB/s (15.0MB/s-23.0MB/s), io=71.0MiB (74.5MB), run=1002-1003msec 00:32:06.388 00:32:06.388 Disk stats (read/write): 00:32:06.388 nvme0n1: ios=4065/4096, merge=0/0, ticks=31124/28460, in_queue=59584, util=90.87% 00:32:06.388 nvme0n2: ios=4658/4937, merge=0/0, ticks=31056/28470, in_queue=59526, util=88.63% 00:32:06.388 nvme0n3: ios=3570/3584, merge=0/0, ticks=23206/27201, in_queue=50407, util=97.51% 00:32:06.388 nvme0n4: ios=2701/3072, merge=0/0, ticks=21942/23150, in_queue=45092, util=98.54% 00:32:06.388 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:06.388 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2886314 00:32:06.388 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:06.388 10:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:06.388 [global] 00:32:06.388 thread=1 00:32:06.388 invalidate=1 00:32:06.388 rw=read 00:32:06.388 time_based=1 00:32:06.388 runtime=10 00:32:06.388 ioengine=libaio 00:32:06.388 direct=1 00:32:06.388 bs=4096 00:32:06.388 iodepth=1 00:32:06.388 norandommap=1 00:32:06.388 numjobs=1 00:32:06.388 00:32:06.388 [job0] 00:32:06.388 filename=/dev/nvme0n1 00:32:06.388 [job1] 00:32:06.388 filename=/dev/nvme0n2 00:32:06.388 [job2] 
00:32:06.388 filename=/dev/nvme0n3 00:32:06.389 [job3] 00:32:06.389 filename=/dev/nvme0n4 00:32:06.389 Could not set queue depth (nvme0n1) 00:32:06.389 Could not set queue depth (nvme0n2) 00:32:06.389 Could not set queue depth (nvme0n3) 00:32:06.389 Could not set queue depth (nvme0n4) 00:32:06.645 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.645 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.645 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.645 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.645 fio-3.35 00:32:06.645 Starting 4 threads 00:32:09.167 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:09.423 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5914624, buflen=4096 00:32:09.423 fio: pid=2886454, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:09.423 10:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:09.679 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:09.679 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:09.679 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=335872, buflen=4096 00:32:09.679 fio: pid=2886453, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:09.935 
fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56483840, buflen=4096 00:32:09.935 fio: pid=2886451, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:09.935 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:09.935 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:10.192 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:10.192 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:10.192 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=339968, buflen=4096 00:32:10.192 fio: pid=2886452, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:32:10.192 00:32:10.192 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2886451: Wed Nov 20 10:10:43 2024 00:32:10.192 read: IOPS=4414, BW=17.2MiB/s (18.1MB/s)(53.9MiB/3124msec) 00:32:10.192 slat (usec): min=6, max=11657, avg=10.30, stdev=185.96 00:32:10.192 clat (usec): min=173, max=756, avg=213.57, stdev=20.37 00:32:10.192 lat (usec): min=180, max=12187, avg=223.86, stdev=191.89 00:32:10.192 clat percentiles (usec): 00:32:10.192 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:32:10.192 | 30.00th=[ 204], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:32:10.192 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 253], 00:32:10.192 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 379], 99.95th=[ 506], 00:32:10.192 | 99.99th=[ 709] 00:32:10.192 bw ( KiB/s): 
min=14422, max=18768, per=97.42%, avg=17901.00, stdev=1719.84, samples=6 00:32:10.192 iops : min= 3605, max= 4692, avg=4475.17, stdev=430.16, samples=6 00:32:10.192 lat (usec) : 250=92.69%, 500=7.25%, 750=0.04%, 1000=0.01% 00:32:10.192 cpu : usr=0.90%, sys=4.13%, ctx=13797, majf=0, minf=1 00:32:10.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.192 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.192 issued rwts: total=13791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.192 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2886452: Wed Nov 20 10:10:43 2024 00:32:10.192 read: IOPS=25, BW=99.0KiB/s (101kB/s)(332KiB/3352msec) 00:32:10.192 slat (usec): min=13, max=16870, avg=344.27, stdev=2014.95 00:32:10.192 clat (usec): min=263, max=42013, avg=40000.32, stdev=6273.25 00:32:10.192 lat (usec): min=288, max=57987, avg=40260.20, stdev=6586.36 00:32:10.192 clat percentiles (usec): 00:32:10.192 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:10.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:10.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:10.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:10.192 | 99.99th=[42206] 00:32:10.192 bw ( KiB/s): min= 96, max= 104, per=0.54%, avg=99.17, stdev= 3.92, samples=6 00:32:10.192 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:32:10.192 lat (usec) : 500=2.38% 00:32:10.192 lat (msec) : 50=96.43% 00:32:10.192 cpu : usr=0.09%, sys=0.21%, ctx=89, majf=0, minf=2 00:32:10.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:10.192 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.192 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.192 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2886453: Wed Nov 20 10:10:43 2024 00:32:10.192 read: IOPS=28, BW=111KiB/s (114kB/s)(328KiB/2942msec) 00:32:10.192 slat (nsec): min=8070, max=32498, avg=20540.53, stdev=5784.59 00:32:10.192 clat (usec): min=238, max=42012, avg=35586.78, stdev=13961.18 00:32:10.192 lat (usec): min=246, max=42035, avg=35607.27, stdev=13962.89 00:32:10.192 clat percentiles (usec): 00:32:10.192 | 1.00th=[ 239], 5.00th=[ 281], 10.00th=[ 371], 20.00th=[41157], 00:32:10.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:10.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:10.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:10.192 | 99.99th=[42206] 00:32:10.192 bw ( KiB/s): min= 96, max= 104, per=0.53%, avg=97.60, stdev= 3.58, samples=5 00:32:10.192 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:32:10.192 lat (usec) : 250=2.41%, 500=8.43%, 750=2.41% 00:32:10.192 lat (msec) : 50=85.54% 00:32:10.192 cpu : usr=0.03%, sys=0.03%, ctx=83, majf=0, minf=2 00:32:10.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.192 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.192 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.192 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2886454: Wed Nov 20 10:10:43 2024 00:32:10.192 read: IOPS=532, BW=2127KiB/s 
(2178kB/s)(5776KiB/2716msec) 00:32:10.192 slat (nsec): min=6608, max=34040, avg=8169.49, stdev=3387.90 00:32:10.192 clat (usec): min=185, max=45019, avg=1855.82, stdev=7979.96 00:32:10.192 lat (usec): min=194, max=45048, avg=1863.99, stdev=7983.07 00:32:10.193 clat percentiles (usec): 00:32:10.193 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 210], 00:32:10.193 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:32:10.193 | 70.00th=[ 225], 80.00th=[ 249], 90.00th=[ 281], 95.00th=[ 302], 00:32:10.193 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[44827], 00:32:10.193 | 99.99th=[44827] 00:32:10.193 bw ( KiB/s): min= 96, max= 4576, per=5.40%, avg=993.60, stdev=2002.63, samples=5 00:32:10.193 iops : min= 24, max= 1144, avg=248.40, stdev=500.66, samples=5 00:32:10.193 lat (usec) : 250=80.28%, 500=15.64% 00:32:10.193 lat (msec) : 50=4.01% 00:32:10.193 cpu : usr=0.04%, sys=0.70%, ctx=1445, majf=0, minf=2 00:32:10.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.193 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.193 issued rwts: total=1445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.193 00:32:10.193 Run status group 0 (all jobs): 00:32:10.193 READ: bw=17.9MiB/s (18.8MB/s), 99.0KiB/s-17.2MiB/s (101kB/s-18.1MB/s), io=60.2MiB (63.1MB), run=2716-3352msec 00:32:10.193 00:32:10.193 Disk stats (read/write): 00:32:10.193 nvme0n1: ios=13790/0, merge=0/0, ticks=2882/0, in_queue=2882, util=94.48% 00:32:10.193 nvme0n2: ios=108/0, merge=0/0, ticks=3658/0, in_queue=3658, util=99.23% 00:32:10.193 nvme0n3: ios=80/0, merge=0/0, ticks=2836/0, in_queue=2836, util=96.55% 00:32:10.193 nvme0n4: ios=1048/0, merge=0/0, ticks=2595/0, in_queue=2595, util=96.45% 00:32:10.449 10:10:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:10.449 10:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:10.449 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:10.449 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:10.706 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:10.706 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:10.963 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:10.963 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2886314 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:11.219 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:11.219 nvmf hotplug test: fio failed as expected 00:32:11.219 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.476 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:11.476 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.477 rmmod nvme_tcp 00:32:11.477 rmmod nvme_fabrics 00:32:11.477 rmmod nvme_keyring 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2883614 ']' 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2883614 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2883614 ']' 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2883614 00:32:11.477 10:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.477 10:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883614 00:32:11.477 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.477 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.477 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883614' 00:32:11.477 killing process with pid 2883614 00:32:11.477 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2883614 00:32:11.477 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2883614 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:11.736 
10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.736 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.737 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.737 10:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.274 00:32:14.274 real 0m26.410s 00:32:14.274 user 1m31.202s 00:32:14.274 sys 0m10.954s 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.274 ************************************ 00:32:14.274 END TEST nvmf_fio_target 00:32:14.274 ************************************ 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:14.274 ************************************ 00:32:14.274 START TEST nvmf_bdevio 00:32:14.274 
************************************ 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:14.274 * Looking for test storage... 00:32:14.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.274 --rc genhtml_branch_coverage=1 00:32:14.274 --rc genhtml_function_coverage=1 00:32:14.274 --rc genhtml_legend=1 00:32:14.274 --rc geninfo_all_blocks=1 00:32:14.274 --rc geninfo_unexecuted_blocks=1 00:32:14.274 00:32:14.274 ' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.274 --rc genhtml_branch_coverage=1 00:32:14.274 --rc genhtml_function_coverage=1 00:32:14.274 --rc genhtml_legend=1 00:32:14.274 --rc geninfo_all_blocks=1 00:32:14.274 --rc geninfo_unexecuted_blocks=1 00:32:14.274 00:32:14.274 ' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.274 --rc genhtml_branch_coverage=1 00:32:14.274 --rc genhtml_function_coverage=1 00:32:14.274 --rc genhtml_legend=1 00:32:14.274 --rc geninfo_all_blocks=1 00:32:14.274 --rc geninfo_unexecuted_blocks=1 00:32:14.274 00:32:14.274 ' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:14.274 --rc genhtml_branch_coverage=1 00:32:14.274 --rc genhtml_function_coverage=1 00:32:14.274 --rc genhtml_legend=1 00:32:14.274 --rc geninfo_all_blocks=1 00:32:14.274 --rc geninfo_unexecuted_blocks=1 00:32:14.274 00:32:14.274 ' 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.274 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.275 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.275 10:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:14.275 10:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.846 10:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.846 10:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:20.846 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:20.846 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:20.846 Found net devices under 0000:86:00.0: cvl_0_0 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:20.846 Found net devices under 0000:86:00.1: cvl_0_1 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.846 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.847 
10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:32:20.847 00:32:20.847 --- 10.0.0.2 ping statistics --- 00:32:20.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.847 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:32:20.847 00:32:20.847 --- 10.0.0.1 ping statistics --- 00:32:20.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.847 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2890690 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2890690 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2890690 ']' 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 [2024-11-20 10:10:53.572300] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:20.847 [2024-11-20 10:10:53.573168] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:32:20.847 [2024-11-20 10:10:53.573199] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.847 [2024-11-20 10:10:53.649045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:20.847 [2024-11-20 10:10:53.690293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.847 [2024-11-20 10:10:53.690329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.847 [2024-11-20 10:10:53.690336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.847 [2024-11-20 10:10:53.690342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.847 [2024-11-20 10:10:53.690347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.847 [2024-11-20 10:10:53.691959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:20.847 [2024-11-20 10:10:53.692066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:20.847 [2024-11-20 10:10:53.692173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.847 [2024-11-20 10:10:53.692174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:20.847 [2024-11-20 10:10:53.757547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:20.847 [2024-11-20 10:10:53.758603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:20.847 [2024-11-20 10:10:53.758613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:20.847 [2024-11-20 10:10:53.758866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:20.847 [2024-11-20 10:10:53.758944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 [2024-11-20 10:10:53.832956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 Malloc0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.847 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:20.848 [2024-11-20 10:10:53.917187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
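The target setup traced above (create the TCP transport, back it with a malloc bdev, create a subsystem, attach the namespace, add a listener) can be sketched as plain SPDK `rpc.py` calls. This is a hedged reconstruction from the `rpc_cmd` lines in the trace; the `RPC` path is taken from this workspace and a running `nvmf_tgt` listening on the default RPC socket is assumed:

```shell
# Sketch of the bdevio target setup from the trace above, as direct rpc.py
# invocations (assumes nvmf_tgt is already running; path is this workspace's).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```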
00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:20.848 { 00:32:20.848 "params": { 00:32:20.848 "name": "Nvme$subsystem", 00:32:20.848 "trtype": "$TEST_TRANSPORT", 00:32:20.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.848 "adrfam": "ipv4", 00:32:20.848 "trsvcid": "$NVMF_PORT", 00:32:20.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.848 "hdgst": ${hdgst:-false}, 00:32:20.848 "ddgst": ${ddgst:-false} 00:32:20.848 }, 00:32:20.848 "method": "bdev_nvme_attach_controller" 00:32:20.848 } 00:32:20.848 EOF 00:32:20.848 )") 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:20.848 10:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:20.848 "params": { 00:32:20.848 "name": "Nvme1", 00:32:20.848 "trtype": "tcp", 00:32:20.848 "traddr": "10.0.0.2", 00:32:20.848 "adrfam": "ipv4", 00:32:20.848 "trsvcid": "4420", 00:32:20.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.848 "hdgst": false, 00:32:20.848 "ddgst": false 00:32:20.848 }, 00:32:20.848 "method": "bdev_nvme_attach_controller" 00:32:20.848 }' 00:32:20.848 [2024-11-20 10:10:53.967975] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:32:20.848 [2024-11-20 10:10:53.968025] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890777 ] 00:32:20.848 [2024-11-20 10:10:54.044668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:20.848 [2024-11-20 10:10:54.089386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.848 [2024-11-20 10:10:54.089491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.848 [2024-11-20 10:10:54.089492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.848 I/O targets: 00:32:20.848 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:20.848 00:32:20.848 00:32:20.848 CUnit - A unit testing framework for C - Version 2.1-3 00:32:20.848 http://cunit.sourceforge.net/ 00:32:20.848 00:32:20.848 00:32:20.848 Suite: bdevio tests on: Nvme1n1 00:32:20.848 Test: blockdev write read block ...passed 00:32:20.848 Test: blockdev write zeroes read block ...passed 00:32:20.848 Test: blockdev write zeroes read no split ...passed 00:32:20.848 Test: blockdev 
write zeroes read split ...passed 00:32:20.848 Test: blockdev write zeroes read split partial ...passed 00:32:20.848 Test: blockdev reset ...[2024-11-20 10:10:54.392507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:20.848 [2024-11-20 10:10:54.392569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdaf340 (9): Bad file descriptor 00:32:21.105 [2024-11-20 10:10:54.445492] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:21.105 passed 00:32:21.105 Test: blockdev write read 8 blocks ...passed 00:32:21.105 Test: blockdev write read size > 128k ...passed 00:32:21.105 Test: blockdev write read invalid size ...passed 00:32:21.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:21.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:21.106 Test: blockdev write read max offset ...passed 00:32:21.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:21.106 Test: blockdev writev readv 8 blocks ...passed 00:32:21.106 Test: blockdev writev readv 30 x 1block ...passed 00:32:21.106 Test: blockdev writev readv block ...passed 00:32:21.106 Test: blockdev writev readv size > 128k ...passed 00:32:21.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:21.106 Test: blockdev comparev and writev ...[2024-11-20 10:10:54.660184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.660216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.660231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 
[2024-11-20 10:10:54.660239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.660522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.660533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.660545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.660552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.660832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.660842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.660853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.660860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.661142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.661155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:21.106 [2024-11-20 10:10:54.661167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:21.106 [2024-11-20 10:10:54.661174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:21.364 passed 00:32:21.364 Test: blockdev nvme passthru rw ...passed 00:32:21.364 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:10:54.744552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:21.364 [2024-11-20 10:10:54.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:21.364 [2024-11-20 10:10:54.744690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:21.364 [2024-11-20 10:10:54.744700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:21.364 [2024-11-20 10:10:54.744806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:21.364 [2024-11-20 10:10:54.744816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:21.364 [2024-11-20 10:10:54.744923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:21.364 [2024-11-20 10:10:54.744932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:21.364 passed 00:32:21.364 Test: blockdev nvme admin passthru ...passed 00:32:21.364 Test: blockdev copy ...passed 00:32:21.364 00:32:21.364 Run Summary: Type Total Ran Passed Failed Inactive 00:32:21.364 suites 1 1 n/a 0 0 00:32:21.364 tests 23 23 23 0 0 00:32:21.364 asserts 152 152 152 0 n/a 00:32:21.364 00:32:21.364 Elapsed time = 1.054 
seconds 00:32:21.364 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.364 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.364 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.622 10:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.622 rmmod nvme_tcp 00:32:21.622 rmmod nvme_fabrics 00:32:21.622 rmmod nvme_keyring 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2890690 ']' 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2890690 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2890690 ']' 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2890690 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890690 00:32:21.622 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:21.623 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:21.623 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890690' 00:32:21.623 killing process with pid 2890690 00:32:21.623 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2890690 00:32:21.623 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2890690 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.882 10:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.788 00:32:23.788 real 0m9.973s 00:32:23.788 user 0m8.564s 00:32:23.788 sys 0m5.192s 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:23.788 ************************************ 00:32:23.788 END TEST nvmf_bdevio 00:32:23.788 ************************************ 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:23.788 00:32:23.788 real 4m35.516s 00:32:23.788 user 9m9.502s 00:32:23.788 sys 1m51.698s 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:23.788 10:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:23.788 ************************************ 00:32:23.788 END TEST nvmf_target_core_interrupt_mode 00:32:23.788 ************************************ 00:32:24.048 10:10:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:24.048 10:10:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:24.048 10:10:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.048 10:10:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.048 ************************************ 00:32:24.048 START TEST nvmf_interrupt 00:32:24.048 ************************************ 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:24.048 * Looking for test storage... 
00:32:24.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.048 --rc genhtml_branch_coverage=1 00:32:24.048 --rc genhtml_function_coverage=1 00:32:24.048 --rc genhtml_legend=1 00:32:24.048 --rc geninfo_all_blocks=1 00:32:24.048 --rc geninfo_unexecuted_blocks=1 00:32:24.048 00:32:24.048 ' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.048 --rc genhtml_branch_coverage=1 00:32:24.048 --rc 
genhtml_function_coverage=1 00:32:24.048 --rc genhtml_legend=1 00:32:24.048 --rc geninfo_all_blocks=1 00:32:24.048 --rc geninfo_unexecuted_blocks=1 00:32:24.048 00:32:24.048 ' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.048 --rc genhtml_branch_coverage=1 00:32:24.048 --rc genhtml_function_coverage=1 00:32:24.048 --rc genhtml_legend=1 00:32:24.048 --rc geninfo_all_blocks=1 00:32:24.048 --rc geninfo_unexecuted_blocks=1 00:32:24.048 00:32:24.048 ' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.048 --rc genhtml_branch_coverage=1 00:32:24.048 --rc genhtml_function_coverage=1 00:32:24.048 --rc genhtml_legend=1 00:32:24.048 --rc geninfo_all_blocks=1 00:32:24.048 --rc geninfo_unexecuted_blocks=1 00:32:24.048 00:32:24.048 ' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.048 
10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.048 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.308 
10:10:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.308 10:10:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:24.308 
10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:24.308 10:10:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.880 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.881 10:11:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:30.881 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:30.881 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.881 10:11:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:30.881 Found net devices under 0000:86:00.0: cvl_0_0 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:30.881 Found net devices under 0000:86:00.1: cvl_0_1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.881 10:11:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:32:30.881 00:32:30.881 --- 10.0.0.2 ping statistics --- 00:32:30.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.881 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:32:30.881 00:32:30.881 --- 10.0.0.1 ping statistics --- 00:32:30.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.881 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.881 10:11:03 
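The `nvmf_tcp_init` region above builds the test topology by splitting the two ports of one NIC across network namespaces: `cvl_0_0` moves into a private namespace and becomes the target side (10.0.0.2), while `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses the physical link. A condensed sketch of those steps, collected from the wrapped trace lines (requires root; interface and namespace names are the ones the log uses, and will differ on other hosts):

```shell
# Target-side port goes into its own namespace; initiator stays in root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address each side of the link, then bring everything up.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify
# reachability in both directions, as the ping output above shows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the later `nvmf_tgt` invocation in the log is prefixed with that command.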
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2894481 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2894481 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2894481 ']' 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.881 10:11:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:30.882 [2024-11-20 10:11:03.623563] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.882 [2024-11-20 10:11:03.624452] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:32:30.882 [2024-11-20 10:11:03.624486] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.882 [2024-11-20 10:11:03.703509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:30.882 [2024-11-20 10:11:03.742643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.882 [2024-11-20 10:11:03.742679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.882 [2024-11-20 10:11:03.742688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.882 [2024-11-20 10:11:03.742694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.882 [2024-11-20 10:11:03.742701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.882 [2024-11-20 10:11:03.743906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.882 [2024-11-20 10:11:03.743907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.882 [2024-11-20 10:11:03.811035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.882 [2024-11-20 10:11:03.811559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:30.882 [2024-11-20 10:11:03.811792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:30.882 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.882 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:30.882 10:11:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:30.882 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:30.882 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:31.141 5000+0 records in 00:32:31.141 5000+0 records out 00:32:31.141 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0177307 s, 578 MB/s 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 AIO0 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.141 10:11:04 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 [2024-11-20 10:11:04.564744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.141 [2024-11-20 10:11:04.604990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2894481 0 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- 
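The `rpc_cmd` calls traced above amount to a short target bring-up sequence: back a bdev with a 10 MB AIO file, create the TCP transport, and publish the bdev as a namespace of `cnode1` on 10.0.0.2:4420. Collected into one place (this is a sketch against a running `nvmf_tgt`, not something runnable standalone; the `rpc.py` path assumes an SPDK checkout, and `truncate` stands in for the log's `dd` step):

```shell
rpc=./scripts/rpc.py   # assumed location inside an SPDK tree

# 10 MB backing file (the log writes it with: dd if=/dev/zero bs=2048 count=5000)
truncate -s 10M /tmp/aiofile
$rpc bdev_aio_create /tmp/aiofile AIO0 2048

# Transport, subsystem, namespace, listener -- matching the trace above.
$rpc nvmf_create_transport -t tcp -o -u 8192 -q 256
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the last call the target logs `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***`, which is the readiness signal the test waits for before starting `spdk_nvme_perf`.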
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 0 idle 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.141 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:31.142 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894481 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894481 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.402 
10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2894481 1 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 1 idle 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894485 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894485 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.402 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:31.660 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2894744 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2894481 0 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2894481 0 busy 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:31.661 10:11:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894481 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894481 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:31.661 10:11:05 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2894481 1 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2894481 1 busy 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:31.661 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:31.918 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894485 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.28 reactor_1' 00:32:31.918 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894485 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.28 reactor_1 00:32:31.918 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.918 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.918 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.919 10:11:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2894744 00:32:41.883 Initializing NVMe Controllers 00:32:41.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:41.883 Controller IO queue size 256, less than required. 00:32:41.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:41.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:41.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:41.884 Initialization complete. Launching workers. 
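The reactor_is_busy and reactor_is_idle traces above all reduce to the same parse: take one `top -bHn 1` snapshot for the target PID, grep the reactor thread, strip leading whitespace, read field 9 as the CPU rate, truncate the decimals, and compare against a threshold. A minimal self-contained sketch of that parse, with the `top` output line hardcoded so it runs without a live SPDK reactor (the threshold mirrors the BUSY_THRESHOLD=30 seen in the trace):

```shell
# Sketch of interrupt/common.sh's CPU-rate check, using a captured top line
# instead of a live `top -bHn 1 -p <pid>` call so it is reproducible anywhere.
top_reactor=' 2894481 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0'

# Strip leading whitespace and take the %CPU column (field 9), as common.sh@27 does.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}      # truncate decimals (common.sh@28): 99.9 -> 99

busy_threshold=30            # BUSY_THRESHOLD from the trace above
if (( cpu_rate < busy_threshold )); then
  state=idle
else
  state=busy
fi
echo "cpu_rate=${cpu_rate} state=${state}"
```

With the sample line above this prints `cpu_rate=99 state=busy`, matching the reactor_0 check in the trace; feeding it an `S 0.0` line instead would yield `state=idle`.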
00:32:41.884 ======================================================== 00:32:41.884 Latency(us) 00:32:41.884 Device Information : IOPS MiB/s Average min max 00:32:41.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16785.32 65.57 15258.92 3443.79 28877.24 00:32:41.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16929.62 66.13 15124.55 7648.45 25146.45 00:32:41.884 ======================================================== 00:32:41.884 Total : 33714.94 131.70 15191.45 3443.79 28877.24 00:32:41.884 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2894481 0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 0 idle 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894481 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894481 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2894481 1 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 1 idle 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:41.884 10:11:15 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:41.884 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894485 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894485 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.143 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:42.407 10:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:42.407 10:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:42.407 10:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:42.407 10:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:42.407 10:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2894481 0 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 0 idle 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:44.440 10:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894481 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.48 reactor_0' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894481 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.48 reactor_0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2894481 1 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2894481 1 idle 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2894481 00:32:44.699 
10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2894481 -w 256 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2894485 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2894485 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:44.699 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:44.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.958 rmmod nvme_tcp 00:32:44.958 rmmod nvme_fabrics 00:32:44.958 rmmod nvme_keyring 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.958 10:11:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2894481 ']' 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2894481 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2894481 ']' 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2894481 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.958 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894481 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894481' 00:32:45.218 killing process with pid 2894481 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2894481 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2894481 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.218 10:11:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.762 10:11:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.762 00:32:47.762 real 0m23.403s 00:32:47.762 user 0m39.868s 00:32:47.762 sys 0m8.309s 00:32:47.762 10:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.762 10:11:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:47.762 ************************************ 00:32:47.762 END TEST nvmf_interrupt 00:32:47.762 ************************************ 00:32:47.762 00:32:47.762 real 27m38.420s 00:32:47.762 user 57m0.600s 00:32:47.762 sys 9m20.884s 00:32:47.762 10:11:20 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.762 10:11:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.762 ************************************ 00:32:47.762 END TEST nvmf_tcp 00:32:47.762 ************************************ 00:32:47.762 10:11:20 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:47.762 10:11:20 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.762 10:11:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.762 10:11:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.762 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.762 ************************************ 
00:32:47.762 START TEST spdkcli_nvmf_tcp 00:32:47.762 ************************************ 00:32:47.762 10:11:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.762 * Looking for test storage... 00:32:47.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:47.762 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.763 --rc genhtml_branch_coverage=1 00:32:47.763 --rc genhtml_function_coverage=1 00:32:47.763 --rc genhtml_legend=1 00:32:47.763 --rc geninfo_all_blocks=1 00:32:47.763 --rc geninfo_unexecuted_blocks=1 00:32:47.763 00:32:47.763 ' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.763 --rc genhtml_branch_coverage=1 00:32:47.763 --rc genhtml_function_coverage=1 00:32:47.763 --rc genhtml_legend=1 00:32:47.763 --rc geninfo_all_blocks=1 
00:32:47.763 --rc geninfo_unexecuted_blocks=1 00:32:47.763 00:32:47.763 ' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.763 --rc genhtml_branch_coverage=1 00:32:47.763 --rc genhtml_function_coverage=1 00:32:47.763 --rc genhtml_legend=1 00:32:47.763 --rc geninfo_all_blocks=1 00:32:47.763 --rc geninfo_unexecuted_blocks=1 00:32:47.763 00:32:47.763 ' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.763 --rc genhtml_branch_coverage=1 00:32:47.763 --rc genhtml_function_coverage=1 00:32:47.763 --rc genhtml_legend=1 00:32:47.763 --rc geninfo_all_blocks=1 00:32:47.763 --rc geninfo_unexecuted_blocks=1 00:32:47.763 00:32:47.763 ' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
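The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions helper, which splits both versions on `.`/`-`/`:` and compares field by field to decide whether lcov is new enough. A hedged standalone sketch of that dot-separated comparison (numeric fields assumed; `version_lt` is an illustrative name, not the SPDK function):

```shell
# Sketch of the field-by-field version comparison traced above.
# Returns 0 (true) when $1 < $2, treating missing fields as 0.
version_lt() {
  local IFS=.
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local n=${#v1[@]} i
  (( ${#v2[@]} > n )) && n=${#v2[@]}
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo lower || echo not-lower
```

As in the trace, `1.15 < 2` holds because the first fields already differ (1 < 2), so the lcov branch-coverage options get exported.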
00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2897442 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2897442 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2897442 ']' 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:47.763 
10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.763 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.763 [2024-11-20 10:11:21.202280] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:32:47.763 [2024-11-20 10:11:21.202330] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897442 ] 00:32:47.763 [2024-11-20 10:11:21.275716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.763 [2024-11-20 10:11:21.316159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.763 [2024-11-20 10:11:21.316161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.022 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.022 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.023 10:11:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:48.023 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:48.023 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:48.023 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:48.023 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:48.023 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:48.023 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:48.023 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:48.023 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:48.023 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:48.023 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:48.023 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:48.023 ' 00:32:50.591 [2024-11-20 10:11:24.150548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.965 [2024-11-20 10:11:25.495028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:54.498 [2024-11-20 10:11:27.986589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:57.031 [2024-11-20 10:11:30.145477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:58.407 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:58.407 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:58.407 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.407 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.407 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:58.407 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:58.407 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:58.407 10:11:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.975 10:11:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:58.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:58.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:58.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:58.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:58.975 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:58.975 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:58.975 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:58.975 ' 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:05.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:05.546 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:05.546 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:05.546 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2897442 ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897442' 00:33:05.546 killing process with pid 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2897442 00:33:05.546 10:11:38 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2897442 ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2897442 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2897442 ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2897442 00:33:05.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2897442) - No such process 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2897442 is not found' 00:33:05.546 Process with pid 2897442 is not found 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:05.546 00:33:05.546 real 0m17.366s 00:33:05.546 user 0m38.267s 00:33:05.546 sys 0m0.787s 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.546 10:11:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:05.546 ************************************ 00:33:05.546 END TEST spdkcli_nvmf_tcp 00:33:05.546 ************************************ 00:33:05.546 10:11:38 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:05.546 10:11:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:05.546 10:11:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:33:05.546 10:11:38 -- common/autotest_common.sh@10 -- # set +x 00:33:05.546 ************************************ 00:33:05.546 START TEST nvmf_identify_passthru 00:33:05.546 ************************************ 00:33:05.546 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:05.546 * Looking for test storage... 00:33:05.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.546 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.546 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.546 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.546 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:05.546 10:11:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.547 --rc genhtml_branch_coverage=1 00:33:05.547 --rc genhtml_function_coverage=1 00:33:05.547 --rc genhtml_legend=1 00:33:05.547 --rc geninfo_all_blocks=1 00:33:05.547 --rc geninfo_unexecuted_blocks=1 00:33:05.547 
00:33:05.547 ' 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:05.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.547 --rc genhtml_branch_coverage=1 00:33:05.547 --rc genhtml_function_coverage=1 00:33:05.547 --rc genhtml_legend=1 00:33:05.547 --rc geninfo_all_blocks=1 00:33:05.547 --rc geninfo_unexecuted_blocks=1 00:33:05.547 00:33:05.547 ' 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.547 --rc genhtml_branch_coverage=1 00:33:05.547 --rc genhtml_function_coverage=1 00:33:05.547 --rc genhtml_legend=1 00:33:05.547 --rc geninfo_all_blocks=1 00:33:05.547 --rc geninfo_unexecuted_blocks=1 00:33:05.547 00:33:05.547 ' 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.547 --rc genhtml_branch_coverage=1 00:33:05.547 --rc genhtml_function_coverage=1 00:33:05.547 --rc genhtml_legend=1 00:33:05.547 --rc geninfo_all_blocks=1 00:33:05.547 --rc geninfo_unexecuted_blocks=1 00:33:05.547 00:33:05.547 ' 00:33:05.547 10:11:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.547 10:11:38 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:05.547 10:11:38 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:05.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.547 10:11:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:05.547 10:11:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.547 10:11:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.547 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.547 10:11:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.548 10:11:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.823 
10:11:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:10.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:10.823 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:10.823 Found net devices under 0000:86:00.0: cvl_0_0 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.823 10:11:44 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:10.823 Found net devices under 0000:86:00.1: cvl_0_1 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.823 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.824 
10:11:44 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.824 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:11.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:33:11.083 00:33:11.083 --- 10.0.0.2 ping statistics --- 00:33:11.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.083 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:11.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:11.083 00:33:11.083 --- 10.0.0.1 ping statistics --- 00:33:11.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.083 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:11.083 10:11:44 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:11.083 
10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:11.083 10:11:44 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:11.083 10:11:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:16.354 10:11:49 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:33:16.354 10:11:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:16.354 10:11:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:16.354 10:11:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2904926 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:20.541 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2904926 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2904926 ']' 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.541 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.799 [2024-11-20 10:11:54.121246] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:33:20.799 [2024-11-20 10:11:54.121294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.799 [2024-11-20 10:11:54.180647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.799 [2024-11-20 10:11:54.223978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.799 [2024-11-20 10:11:54.224014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.799 [2024-11-20 10:11:54.224024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.799 [2024-11-20 10:11:54.224031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.799 [2024-11-20 10:11:54.224037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:20.799 [2024-11-20 10:11:54.225643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.799 [2024-11-20 10:11:54.225750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.799 [2024-11-20 10:11:54.225858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.799 [2024-11-20 10:11:54.225859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:20.799 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.799 INFO: Log level set to 20 00:33:20.799 INFO: Requests: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "method": "nvmf_set_config", 00:33:20.799 "id": 1, 00:33:20.799 "params": { 00:33:20.799 "admin_cmd_passthru": { 00:33:20.799 "identify_ctrlr": true 00:33:20.799 } 00:33:20.799 } 00:33:20.799 } 00:33:20.799 00:33:20.799 INFO: response: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "id": 1, 00:33:20.799 "result": true 00:33:20.799 } 00:33:20.799 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.799 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.799 INFO: Setting log level to 20 00:33:20.799 INFO: Setting log level to 20 00:33:20.799 INFO: Log level set to 20 00:33:20.799 INFO: Log level set to 20 00:33:20.799 
INFO: Requests: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "method": "framework_start_init", 00:33:20.799 "id": 1 00:33:20.799 } 00:33:20.799 00:33:20.799 INFO: Requests: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "method": "framework_start_init", 00:33:20.799 "id": 1 00:33:20.799 } 00:33:20.799 00:33:20.799 [2024-11-20 10:11:54.348338] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:20.799 INFO: response: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "id": 1, 00:33:20.799 "result": true 00:33:20.799 } 00:33:20.799 00:33:20.799 INFO: response: 00:33:20.799 { 00:33:20.799 "jsonrpc": "2.0", 00:33:20.799 "id": 1, 00:33:20.799 "result": true 00:33:20.799 } 00:33:20.799 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.799 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:20.799 INFO: Setting log level to 40 00:33:20.799 INFO: Setting log level to 40 00:33:20.799 INFO: Setting log level to 40 00:33:20.799 [2024-11-20 10:11:54.361681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.799 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.799 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:21.057 10:11:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:21.057 10:11:54 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.057 10:11:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 Nvme0n1 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 [2024-11-20 10:11:57.271618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.339 10:11:57 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 [ 00:33:24.339 { 00:33:24.339 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:24.339 "subtype": "Discovery", 00:33:24.339 "listen_addresses": [], 00:33:24.339 "allow_any_host": true, 00:33:24.339 "hosts": [] 00:33:24.339 }, 00:33:24.339 { 00:33:24.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.339 "subtype": "NVMe", 00:33:24.339 "listen_addresses": [ 00:33:24.339 { 00:33:24.339 "trtype": "TCP", 00:33:24.339 "adrfam": "IPv4", 00:33:24.339 "traddr": "10.0.0.2", 00:33:24.339 "trsvcid": "4420" 00:33:24.339 } 00:33:24.339 ], 00:33:24.339 "allow_any_host": true, 00:33:24.339 "hosts": [], 00:33:24.339 "serial_number": "SPDK00000000000001", 00:33:24.339 "model_number": "SPDK bdev Controller", 00:33:24.339 "max_namespaces": 1, 00:33:24.339 "min_cntlid": 1, 00:33:24.339 "max_cntlid": 65519, 00:33:24.339 "namespaces": [ 00:33:24.339 { 00:33:24.339 "nsid": 1, 00:33:24.339 "bdev_name": "Nvme0n1", 00:33:24.339 "name": "Nvme0n1", 00:33:24.339 "nguid": "2478F33698AA401B811851F9EA3F94D1", 00:33:24.339 "uuid": "2478f336-98aa-401b-8118-51f9ea3f94d1" 00:33:24.339 } 00:33:24.339 ] 00:33:24.339 } 00:33:24.339 ] 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:24.339 10:11:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.339 rmmod nvme_tcp 00:33:24.339 rmmod nvme_fabrics 00:33:24.339 rmmod nvme_keyring 00:33:24.339 10:11:57 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2904926 ']' 00:33:24.339 10:11:57 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2904926 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2904926 ']' 00:33:24.339 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2904926 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904926 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904926' 00:33:24.340 killing process with pid 2904926 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2904926 00:33:24.340 10:11:57 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2904926 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:26.868 10:11:59 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.868 10:11:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.868 10:11:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.868 10:11:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.776 10:12:01 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.776 00:33:28.776 real 0m23.623s 00:33:28.776 user 0m30.411s 00:33:28.776 sys 0m6.365s 00:33:28.776 10:12:02 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.776 10:12:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:28.776 ************************************ 00:33:28.776 END TEST nvmf_identify_passthru 00:33:28.776 ************************************ 00:33:28.776 10:12:02 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.776 10:12:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:28.776 10:12:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.776 10:12:02 -- common/autotest_common.sh@10 -- # set +x 00:33:28.776 ************************************ 00:33:28.776 START TEST nvmf_dif 00:33:28.776 ************************************ 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.776 * Looking for test storage... 
00:33:28.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.776 --rc genhtml_branch_coverage=1 00:33:28.776 --rc genhtml_function_coverage=1 00:33:28.776 --rc genhtml_legend=1 00:33:28.776 --rc geninfo_all_blocks=1 00:33:28.776 --rc geninfo_unexecuted_blocks=1 00:33:28.776 00:33:28.776 ' 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.776 --rc genhtml_branch_coverage=1 00:33:28.776 --rc genhtml_function_coverage=1 00:33:28.776 --rc genhtml_legend=1 00:33:28.776 --rc geninfo_all_blocks=1 00:33:28.776 --rc geninfo_unexecuted_blocks=1 00:33:28.776 00:33:28.776 ' 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.776 --rc genhtml_branch_coverage=1 00:33:28.776 --rc genhtml_function_coverage=1 00:33:28.776 --rc genhtml_legend=1 00:33:28.776 --rc geninfo_all_blocks=1 00:33:28.776 --rc geninfo_unexecuted_blocks=1 00:33:28.776 00:33:28.776 ' 00:33:28.776 10:12:02 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:28.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.776 --rc genhtml_branch_coverage=1 00:33:28.776 --rc genhtml_function_coverage=1 00:33:28.776 --rc genhtml_legend=1 00:33:28.776 --rc geninfo_all_blocks=1 00:33:28.776 --rc geninfo_unexecuted_blocks=1 00:33:28.776 00:33:28.776 ' 00:33:28.776 10:12:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:28.776 10:12:02 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.776 10:12:02 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.776 10:12:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.777 10:12:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.777 10:12:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.777 10:12:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.777 10:12:02 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.777 10:12:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:28.777 10:12:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:28.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.777 10:12:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:28.777 10:12:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:28.777 10:12:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:28.777 10:12:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:28.777 10:12:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.777 10:12:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.777 10:12:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.777 10:12:02 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.777 10:12:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:35.344 10:12:07 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:35.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:35.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.344 10:12:07 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:35.344 Found net devices under 0000:86:00.0: cvl_0_0 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:35.344 Found net devices under 0000:86:00.1: cvl_0_1 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.344 
10:12:07 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:35.344 10:12:07 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.345 10:12:07 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:35.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:35.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:33:35.345 00:33:35.345 --- 10.0.0.2 ping statistics --- 00:33:35.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.345 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:33:35.345 00:33:35.345 --- 10.0.0.1 ping statistics --- 00:33:35.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.345 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:35.345 10:12:08 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:37.247 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:37.247 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:37.247 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:37.506 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:37.506 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:37.506 10:12:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:37.506 10:12:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:37.506 10:12:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:37.506 10:12:11 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:37.506 10:12:11 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2910912 00:33:37.506 10:12:11 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2910912 00:33:37.506 10:12:11 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2910912 ']' 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:37.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.506 10:12:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:37.765 [2024-11-20 10:12:11.090711] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:33:37.765 [2024-11-20 10:12:11.090756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.765 [2024-11-20 10:12:11.169331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.765 [2024-11-20 10:12:11.214313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.765 [2024-11-20 10:12:11.214351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.765 [2024-11-20 10:12:11.214358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.765 [2024-11-20 10:12:11.214365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.765 [2024-11-20 10:12:11.214370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:37.765 [2024-11-20 10:12:11.214948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:38.702 10:12:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 10:12:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.702 10:12:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:38.702 10:12:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 [2024-11-20 10:12:11.960330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.702 10:12:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.702 10:12:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 ************************************ 00:33:38.702 START TEST fio_dif_1_default 00:33:38.702 ************************************ 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:38.702 10:12:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 bdev_null0 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:38.702 [2024-11-20 10:12:12.036664] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:38.702 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:38.702 { 00:33:38.702 "params": { 00:33:38.702 "name": "Nvme$subsystem", 00:33:38.702 "trtype": "$TEST_TRANSPORT", 00:33:38.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:38.703 "adrfam": "ipv4", 00:33:38.703 "trsvcid": "$NVMF_PORT", 00:33:38.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:38.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:38.703 "hdgst": ${hdgst:-false}, 00:33:38.703 "ddgst": ${ddgst:-false} 00:33:38.703 }, 00:33:38.703 "method": "bdev_nvme_attach_controller" 00:33:38.703 } 00:33:38.703 EOF 00:33:38.703 )") 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:38.703 "params": { 00:33:38.703 "name": "Nvme0", 00:33:38.703 "trtype": "tcp", 00:33:38.703 "traddr": "10.0.0.2", 00:33:38.703 "adrfam": "ipv4", 00:33:38.703 "trsvcid": "4420", 00:33:38.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:38.703 "hdgst": false, 00:33:38.703 "ddgst": false 00:33:38.703 }, 00:33:38.703 "method": "bdev_nvme_attach_controller" 00:33:38.703 }' 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:38.703 10:12:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.962 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:38.962 fio-3.35 
00:33:38.962 Starting 1 thread 00:33:51.171 00:33:51.171 filename0: (groupid=0, jobs=1): err= 0: pid=2911380: Wed Nov 20 10:12:22 2024 00:33:51.171 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:33:51.171 slat (nsec): min=5671, max=26314, avg=6035.21, stdev=1001.55 00:33:51.171 clat (usec): min=40855, max=44352, avg=41033.49, stdev=283.46 00:33:51.171 lat (usec): min=40861, max=44379, avg=41039.53, stdev=283.86 00:33:51.171 clat percentiles (usec): 00:33:51.171 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:51.171 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:51.171 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:51.171 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:33:51.171 | 99.99th=[44303] 00:33:51.171 bw ( KiB/s): min= 384, max= 416, per=99.55%, avg=388.80, stdev=11.72, samples=20 00:33:51.171 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:51.171 lat (msec) : 50=100.00% 00:33:51.171 cpu : usr=92.24%, sys=7.52%, ctx=5, majf=0, minf=0 00:33:51.171 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.171 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.171 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:51.171 00:33:51.171 Run status group 0 (all jobs): 00:33:51.171 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10017-10017msec 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.171 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 00:33:51.172 real 0m11.089s 00:33:51.172 user 0m16.115s 00:33:51.172 sys 0m1.034s 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 ************************************ 00:33:51.172 END TEST fio_dif_1_default 00:33:51.172 ************************************ 00:33:51.172 10:12:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:51.172 10:12:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.172 10:12:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 ************************************ 00:33:51.172 START TEST fio_dif_1_multi_subsystems 00:33:51.172 ************************************ 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 bdev_null0 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 [2024-11-20 10:12:23.191453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 bdev_null1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.172 { 00:33:51.172 "params": { 00:33:51.172 "name": "Nvme$subsystem", 00:33:51.172 "trtype": "$TEST_TRANSPORT", 00:33:51.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.172 "adrfam": "ipv4", 00:33:51.172 "trsvcid": "$NVMF_PORT", 00:33:51.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.172 "hdgst": ${hdgst:-false}, 00:33:51.172 "ddgst": ${ddgst:-false} 00:33:51.172 }, 00:33:51.172 "method": "bdev_nvme_attach_controller" 00:33:51.172 } 00:33:51.172 EOF 00:33:51.172 )") 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.172 10:12:23 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.172 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.172 { 00:33:51.172 "params": { 00:33:51.172 "name": "Nvme$subsystem", 00:33:51.172 "trtype": "$TEST_TRANSPORT", 00:33:51.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.172 "adrfam": "ipv4", 00:33:51.172 "trsvcid": "$NVMF_PORT", 00:33:51.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.172 "hdgst": ${hdgst:-false}, 00:33:51.172 "ddgst": ${ddgst:-false} 00:33:51.172 }, 00:33:51.172 "method": "bdev_nvme_attach_controller" 00:33:51.172 } 00:33:51.173 EOF 00:33:51.173 )") 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.173 "params": { 00:33:51.173 "name": "Nvme0", 00:33:51.173 "trtype": "tcp", 00:33:51.173 "traddr": "10.0.0.2", 00:33:51.173 "adrfam": "ipv4", 00:33:51.173 "trsvcid": "4420", 00:33:51.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.173 "hdgst": false, 00:33:51.173 "ddgst": false 00:33:51.173 }, 00:33:51.173 "method": "bdev_nvme_attach_controller" 00:33:51.173 },{ 00:33:51.173 "params": { 00:33:51.173 "name": "Nvme1", 00:33:51.173 "trtype": "tcp", 00:33:51.173 "traddr": "10.0.0.2", 00:33:51.173 "adrfam": "ipv4", 00:33:51.173 "trsvcid": "4420", 00:33:51.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.173 "hdgst": false, 00:33:51.173 "ddgst": false 00:33:51.173 }, 00:33:51.173 "method": "bdev_nvme_attach_controller" 00:33:51.173 }' 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.173 10:12:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.173 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:51.173 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:51.173 fio-3.35 00:33:51.173 Starting 2 threads 00:34:01.192 00:34:01.192 filename0: (groupid=0, jobs=1): err= 0: pid=2913320: Wed Nov 20 10:12:34 2024 00:34:01.192 read: IOPS=112, BW=449KiB/s (460kB/s)(4496KiB/10003msec) 00:34:01.192 slat (nsec): min=5816, max=62992, avg=10109.02, stdev=6941.24 00:34:01.192 clat (usec): min=394, max=42687, avg=35565.30, stdev=14326.03 00:34:01.192 lat (usec): min=401, max=42725, avg=35575.41, stdev=14325.64 00:34:01.192 clat percentiles (usec): 00:34:01.192 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 437], 20.00th=[41157], 00:34:01.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:01.192 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:01.192 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:01.192 | 99.99th=[42730] 00:34:01.192 bw ( KiB/s): min= 383, max= 544, per=53.67%, avg=449.63, stdev=49.50, samples=19 00:34:01.192 iops : min= 95, max= 136, avg=112.37, stdev=12.43, samples=19 00:34:01.192 lat (usec) : 500=13.88%, 750=0.18%, 1000=0.18% 00:34:01.192 lat (msec) : 50=85.77% 00:34:01.192 cpu : usr=98.89%, sys=0.81%, ctx=36, majf=0, minf=126 00:34:01.192 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:01.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.192 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.192 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:01.192 filename1: (groupid=0, jobs=1): err= 0: pid=2913321: Wed Nov 20 10:12:34 2024 00:34:01.192 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10040msec) 00:34:01.192 slat (nsec): min=5938, max=78104, avg=11502.56, stdev=8659.45 00:34:01.192 clat (usec): min=589, max=42642, avg=41108.98, stdev=2637.98 00:34:01.192 lat (usec): min=636, max=42680, avg=41120.49, stdev=2635.96 00:34:01.192 clat percentiles (usec): 00:34:01.192 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:01.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:01.192 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:01.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:01.192 | 99.99th=[42730] 00:34:01.192 bw ( KiB/s): min= 352, max= 416, per=46.38%, avg=388.75, stdev=15.68, samples=20 00:34:01.192 iops : min= 88, max= 104, avg=97.15, stdev= 3.94, samples=20 00:34:01.192 lat (usec) : 750=0.41% 00:34:01.192 lat (msec) : 50=99.59% 00:34:01.192 cpu : usr=97.54%, sys=2.19%, ctx=13, majf=0, minf=183 00:34:01.192 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.192 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.192 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:01.192 00:34:01.192 Run status group 0 (all jobs): 00:34:01.192 READ: bw=837KiB/s (857kB/s), 389KiB/s-449KiB/s (398kB/s-460kB/s), io=8400KiB (8602kB), run=10003-10040msec 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- 
# destroy_subsystems 0 1 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.192 10:12:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.192 00:34:01.192 real 0m11.517s 00:34:01.192 user 0m27.245s 00:34:01.192 sys 0m0.655s 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.192 10:12:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:01.192 ************************************ 00:34:01.192 END TEST fio_dif_1_multi_subsystems 00:34:01.192 ************************************ 00:34:01.193 10:12:34 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:01.193 10:12:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.193 10:12:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.193 10:12:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.193 ************************************ 00:34:01.193 START TEST fio_dif_rand_params 00:34:01.193 ************************************ 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:01.193 10:12:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.193 bdev_null0 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.193 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:01.452 [2024-11-20 10:12:34.787886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:01.452 { 00:34:01.452 "params": { 00:34:01.452 "name": "Nvme$subsystem", 00:34:01.452 "trtype": "$TEST_TRANSPORT", 00:34:01.452 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:01.452 "adrfam": "ipv4", 00:34:01.452 "trsvcid": "$NVMF_PORT", 00:34:01.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.452 "hdgst": ${hdgst:-false}, 00:34:01.452 "ddgst": ${ddgst:-false} 00:34:01.452 }, 00:34:01.452 "method": "bdev_nvme_attach_controller" 00:34:01.452 } 00:34:01.452 EOF 00:34:01.452 )") 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:01.452 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:01.453 10:12:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:01.453 "params": { 00:34:01.453 "name": "Nvme0", 00:34:01.453 "trtype": "tcp", 00:34:01.453 "traddr": "10.0.0.2", 00:34:01.453 "adrfam": "ipv4", 00:34:01.453 "trsvcid": "4420", 00:34:01.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.453 "hdgst": false, 00:34:01.453 "ddgst": false 00:34:01.453 }, 00:34:01.453 "method": "bdev_nvme_attach_controller" 00:34:01.453 }' 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.453 10:12:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.712 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:01.712 ... 00:34:01.712 fio-3.35 00:34:01.712 Starting 3 threads 00:34:08.314 00:34:08.314 filename0: (groupid=0, jobs=1): err= 0: pid=2915225: Wed Nov 20 10:12:40 2024 00:34:08.314 read: IOPS=334, BW=41.9MiB/s (43.9MB/s)(211MiB/5046msec) 00:34:08.314 slat (nsec): min=6032, max=26610, avg=10955.14, stdev=1712.37 00:34:08.314 clat (usec): min=5304, max=87846, avg=8919.30, stdev=5288.71 00:34:08.314 lat (usec): min=5312, max=87854, avg=8930.26, stdev=5288.63 00:34:08.314 clat percentiles (usec): 00:34:08.314 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7701], 00:34:08.314 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:34:08.314 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[ 9896], 00:34:08.314 | 99.00th=[46400], 99.50th=[49021], 99.90th=[87557], 99.95th=[87557], 00:34:08.314 | 99.99th=[87557] 00:34:08.314 bw ( KiB/s): min=24064, max=47872, per=35.18%, avg=43187.20, stdev=6881.96, samples=10 00:34:08.314 iops : min= 188, max= 374, avg=337.40, stdev=53.77, samples=10 00:34:08.314 lat (msec) : 10=96.39%, 20=2.43%, 50=0.83%, 100=0.36% 00:34:08.314 cpu : usr=94.65%, sys=5.07%, ctx=8, majf=0, minf=0 00:34:08.314 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 issued rwts: total=1690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.314 filename0: (groupid=0, jobs=1): err= 0: pid=2915226: Wed Nov 20 10:12:40 2024 00:34:08.314 read: IOPS=310, BW=38.9MiB/s (40.8MB/s)(195MiB/5004msec) 00:34:08.314 slat (nsec): min=6068, max=24818, avg=10889.60, stdev=1692.69 00:34:08.314 
clat (usec): min=3452, max=49297, avg=9633.19, stdev=3972.87 00:34:08.314 lat (usec): min=3458, max=49312, avg=9644.08, stdev=3973.00 00:34:08.314 clat percentiles (usec): 00:34:08.314 | 1.00th=[ 3523], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8225], 00:34:08.314 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9896], 00:34:08.314 | 70.00th=[10159], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:34:08.314 | 99.00th=[12649], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:34:08.314 | 99.99th=[49546] 00:34:08.314 bw ( KiB/s): min=37632, max=46592, per=32.39%, avg=39764.30, stdev=2764.21, samples=10 00:34:08.314 iops : min= 294, max= 364, avg=310.60, stdev=21.64, samples=10 00:34:08.314 lat (msec) : 4=2.12%, 10=62.92%, 20=34.00%, 50=0.96% 00:34:08.314 cpu : usr=94.38%, sys=5.32%, ctx=10, majf=0, minf=9 00:34:08.314 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 issued rwts: total=1556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.314 filename0: (groupid=0, jobs=1): err= 0: pid=2915227: Wed Nov 20 10:12:40 2024 00:34:08.314 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5045msec) 00:34:08.314 slat (nsec): min=6018, max=23181, avg=10732.22, stdev=1567.71 00:34:08.314 clat (usec): min=3121, max=49836, avg=9456.32, stdev=3531.38 00:34:08.314 lat (usec): min=3127, max=49859, avg=9467.06, stdev=3531.56 00:34:08.314 clat percentiles (usec): 00:34:08.314 | 1.00th=[ 3589], 5.00th=[ 5997], 10.00th=[ 7635], 20.00th=[ 8455], 00:34:08.314 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:08.314 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:34:08.314 | 99.00th=[12256], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 
00:34:08.314 | 99.99th=[50070] 00:34:08.314 bw ( KiB/s): min=35328, max=42752, per=33.17%, avg=40729.60, stdev=2187.10, samples=10 00:34:08.314 iops : min= 276, max= 334, avg=318.20, stdev=17.09, samples=10 00:34:08.314 lat (msec) : 4=2.51%, 10=69.32%, 20=27.48%, 50=0.69% 00:34:08.314 cpu : usr=94.92%, sys=4.80%, ctx=4, majf=0, minf=11 00:34:08.314 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:08.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.314 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:08.314 00:34:08.314 Run status group 0 (all jobs): 00:34:08.314 READ: bw=120MiB/s (126MB/s), 38.9MiB/s-41.9MiB/s (40.8MB/s-43.9MB/s), io=605MiB (634MB), run=5004-5046msec 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 bdev_null0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 
10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 [2024-11-20 10:12:40.994328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:08.314 10:12:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 bdev_null1 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 
10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.314 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:08.315 bdev_null2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.315 { 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme$subsystem", 00:34:08.315 "trtype": "$TEST_TRANSPORT", 00:34:08.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "$NVMF_PORT", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.315 "hdgst": ${hdgst:-false}, 00:34:08.315 "ddgst": ${ddgst:-false} 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 } 00:34:08.315 EOF 00:34:08.315 )") 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.315 { 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme$subsystem", 00:34:08.315 "trtype": "$TEST_TRANSPORT", 00:34:08.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "$NVMF_PORT", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.315 "hdgst": ${hdgst:-false}, 00:34:08.315 "ddgst": ${ddgst:-false} 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 } 00:34:08.315 EOF 00:34:08.315 )") 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.315 { 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme$subsystem", 00:34:08.315 "trtype": "$TEST_TRANSPORT", 00:34:08.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "$NVMF_PORT", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.315 "hdgst": ${hdgst:-false}, 00:34:08.315 "ddgst": ${ddgst:-false} 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 } 00:34:08.315 EOF 00:34:08.315 )") 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme0", 00:34:08.315 "trtype": "tcp", 00:34:08.315 "traddr": "10.0.0.2", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "4420", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.315 "hdgst": false, 00:34:08.315 "ddgst": false 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 },{ 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme1", 00:34:08.315 "trtype": "tcp", 00:34:08.315 "traddr": "10.0.0.2", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "4420", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.315 "hdgst": false, 00:34:08.315 "ddgst": false 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 },{ 00:34:08.315 "params": { 00:34:08.315 "name": "Nvme2", 00:34:08.315 "trtype": "tcp", 00:34:08.315 "traddr": "10.0.0.2", 00:34:08.315 "adrfam": "ipv4", 00:34:08.315 "trsvcid": "4420", 00:34:08.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:08.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:08.315 "hdgst": false, 00:34:08.315 "ddgst": false 00:34:08.315 }, 00:34:08.315 "method": "bdev_nvme_attach_controller" 00:34:08.315 }' 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.315 10:12:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.315 10:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.315 ... 00:34:08.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.316 ... 00:34:08.316 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.316 ... 
00:34:08.316 fio-3.35 00:34:08.316 Starting 24 threads 00:34:20.513 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916488: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10012msec) 00:34:20.513 slat (nsec): min=7285, max=85642, avg=29420.14, stdev=18873.30 00:34:20.513 clat (usec): min=12249, max=52023, avg=29727.75, stdev=1310.72 00:34:20.513 lat (usec): min=12298, max=52040, avg=29757.17, stdev=1310.51 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[26608], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:34:20.513 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:34:20.513 | 99.99th=[52167] 00:34:20.513 bw ( KiB/s): min= 2059, max= 2176, per=4.17%, avg=2134.47, stdev=50.24, samples=19 00:34:20.513 iops : min= 514, max= 544, avg=533.58, stdev=12.62, samples=19 00:34:20.513 lat (msec) : 20=0.60%, 50=99.36%, 100=0.04% 00:34:20.513 cpu : usr=98.65%, sys=0.94%, ctx=18, majf=0, minf=9 00:34:20.513 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916489: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:20.513 slat (nsec): min=6876, max=71661, avg=34201.73, stdev=10617.04 00:34:20.513 clat (usec): min=13104, max=47603, avg=29729.07, stdev=1470.44 00:34:20.513 lat (usec): min=13144, max=47633, avg=29763.27, stdev=1470.48 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 
1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.513 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47449], 99.95th=[47449], 00:34:20.513 | 99.99th=[47449] 00:34:20.513 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.05, stdev=64.46, samples=19 00:34:20.513 iops : min= 512, max= 544, avg=530.47, stdev=16.08, samples=19 00:34:20.513 lat (msec) : 20=0.60%, 50=99.40% 00:34:20.513 cpu : usr=98.62%, sys=1.03%, ctx=11, majf=0, minf=9 00:34:20.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916490: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=530, BW=2124KiB/s (2174kB/s)(20.8MiB/10006msec) 00:34:20.513 slat (nsec): min=8093, max=96444, avg=35185.68, stdev=13156.16 00:34:20.513 clat (usec): min=11637, max=75802, avg=29821.61, stdev=2470.36 00:34:20.513 lat (usec): min=11646, max=75837, avg=29856.80, stdev=2471.05 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.513 | 99.00th=[30802], 99.50th=[37487], 99.90th=[67634], 99.95th=[67634], 00:34:20.513 | 99.99th=[76022] 00:34:20.513 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2115.11, stdev=78.10, samples=19 00:34:20.513 iops : min= 480, max= 544, avg=528.74, stdev=19.50, samples=19 
00:34:20.513 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:34:20.513 cpu : usr=98.47%, sys=1.15%, ctx=16, majf=0, minf=9 00:34:20.513 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916491: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:20.513 slat (nsec): min=5315, max=84257, avg=30846.56, stdev=18166.98 00:34:20.513 clat (usec): min=12841, max=31953, avg=29687.49, stdev=1213.23 00:34:20.513 lat (usec): min=12858, max=31967, avg=29718.34, stdev=1214.36 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[27132], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.513 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:34:20.513 | 99.99th=[31851] 00:34:20.513 bw ( KiB/s): min= 2043, max= 2176, per=4.17%, avg=2135.32, stdev=61.54, samples=19 00:34:20.513 iops : min= 510, max= 544, avg=533.79, stdev=15.45, samples=19 00:34:20.513 lat (msec) : 20=0.60%, 50=99.40% 00:34:20.513 cpu : usr=98.41%, sys=1.21%, ctx=18, majf=0, minf=9 00:34:20.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916492: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10005msec) 00:34:20.513 slat (nsec): min=7252, max=79483, avg=22512.33, stdev=15910.91 00:34:20.513 clat (usec): min=9009, max=30968, avg=29627.74, stdev=1805.70 00:34:20.513 lat (usec): min=9026, max=31014, avg=29650.25, stdev=1806.13 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[18220], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.513 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[31065], 00:34:20.513 | 99.99th=[31065] 00:34:20.513 bw ( KiB/s): min= 2043, max= 2304, per=4.18%, avg=2141.79, stdev=72.18, samples=19 00:34:20.513 iops : min= 510, max= 576, avg=535.37, stdev=18.09, samples=19 00:34:20.513 lat (msec) : 10=0.30%, 20=0.90%, 50=98.81% 00:34:20.513 cpu : usr=98.44%, sys=1.19%, ctx=12, majf=0, minf=9 00:34:20.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916493: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:20.513 slat (nsec): min=6030, max=71339, avg=31567.82, stdev=12106.51 00:34:20.513 clat (usec): min=19096, max=33891, avg=29810.38, stdev=690.20 00:34:20.513 lat (usec): min=19129, max=33907, avg=29841.95, stdev=688.45 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 
00:34:20.513 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.513 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.513 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:34:20.513 | 99.99th=[33817] 00:34:20.513 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2128.32, stdev=63.60, samples=19 00:34:20.513 iops : min= 510, max= 544, avg=532.00, stdev=15.93, samples=19 00:34:20.513 lat (msec) : 20=0.30%, 50=99.70% 00:34:20.513 cpu : usr=98.42%, sys=1.20%, ctx=18, majf=0, minf=9 00:34:20.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916494: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:34:20.513 slat (nsec): min=4887, max=66662, avg=33238.33, stdev=10762.83 00:34:20.513 clat (usec): min=13096, max=50457, avg=29741.41, stdev=1580.00 00:34:20.513 lat (usec): min=13116, max=50470, avg=29774.65, stdev=1579.90 00:34:20.513 clat percentiles (usec): 00:34:20.513 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.513 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.513 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.513 | 99.00th=[30540], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:34:20.513 | 99.99th=[50594] 00:34:20.513 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2121.84, stdev=77.51, samples=19 00:34:20.513 iops : min= 480, max= 544, avg=530.42, stdev=19.35, samples=19 00:34:20.513 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:34:20.513 
cpu : usr=98.64%, sys=0.97%, ctx=13, majf=0, minf=9 00:34:20.513 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.513 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.513 filename0: (groupid=0, jobs=1): err= 0: pid=2916495: Wed Nov 20 10:12:52 2024 00:34:20.513 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:20.514 slat (nsec): min=7891, max=76210, avg=26976.98, stdev=12988.96 00:34:20.514 clat (usec): min=19434, max=34673, avg=29853.11, stdev=695.37 00:34:20.514 lat (usec): min=19442, max=34691, avg=29880.08, stdev=693.83 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.514 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.514 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.514 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:34:20.514 | 99.99th=[34866] 00:34:20.514 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2128.32, stdev=63.60, samples=19 00:34:20.514 iops : min= 510, max= 544, avg=532.00, stdev=15.93, samples=19 00:34:20.514 lat (msec) : 20=0.30%, 50=99.70% 00:34:20.514 cpu : usr=98.73%, sys=0.89%, ctx=12, majf=0, minf=9 00:34:20.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916496: Wed 
Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:34:20.514 slat (nsec): min=4186, max=74395, avg=32736.77, stdev=10737.83 00:34:20.514 clat (usec): min=13097, max=50448, avg=29745.48, stdev=1579.18 00:34:20.514 lat (usec): min=13115, max=50461, avg=29778.21, stdev=1579.10 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.514 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.514 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.514 | 99.00th=[30540], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:34:20.514 | 99.99th=[50594] 00:34:20.514 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2121.84, stdev=77.51, samples=19 00:34:20.514 iops : min= 480, max= 544, avg=530.42, stdev=19.35, samples=19 00:34:20.514 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:34:20.514 cpu : usr=98.61%, sys=1.01%, ctx=12, majf=0, minf=9 00:34:20.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916497: Wed Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:20.514 slat (nsec): min=5982, max=85641, avg=30264.13, stdev=18250.49 00:34:20.514 clat (usec): min=9506, max=52435, avg=29665.52, stdev=1433.36 00:34:20.514 lat (usec): min=9518, max=52454, avg=29695.79, stdev=1434.95 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[25297], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:20.514 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 
60.00th=[29754], 00:34:20.514 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.514 | 99.00th=[30802], 99.50th=[31065], 99.90th=[32113], 99.95th=[46924], 00:34:20.514 | 99.99th=[52691] 00:34:20.514 bw ( KiB/s): min= 2043, max= 2192, per=4.17%, avg=2135.32, stdev=61.77, samples=19 00:34:20.514 iops : min= 510, max= 548, avg=533.79, stdev=15.50, samples=19 00:34:20.514 lat (msec) : 10=0.04%, 20=0.64%, 50=99.29%, 100=0.04% 00:34:20.514 cpu : usr=98.61%, sys=1.02%, ctx=11, majf=0, minf=9 00:34:20.514 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916498: Wed Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10005msec) 00:34:20.514 slat (nsec): min=7294, max=80728, avg=20137.99, stdev=12396.98 00:34:20.514 clat (usec): min=9002, max=33386, avg=29693.25, stdev=1801.51 00:34:20.514 lat (usec): min=9014, max=33411, avg=29713.39, stdev=1801.51 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[18220], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.514 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.514 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.514 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:34:20.514 | 99.99th=[33424] 00:34:20.514 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2141.79, stdev=71.69, samples=19 00:34:20.514 iops : min= 512, max= 576, avg=535.37, stdev=17.89, samples=19 00:34:20.514 lat (msec) : 10=0.26%, 20=0.93%, 50=98.81% 00:34:20.514 cpu : usr=98.57%, sys=1.07%, ctx=12, majf=0, 
minf=9 00:34:20.514 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916499: Wed Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:20.514 slat (nsec): min=5579, max=68275, avg=34499.08, stdev=10133.80 00:34:20.514 clat (usec): min=13098, max=52235, avg=29745.77, stdev=1520.62 00:34:20.514 lat (usec): min=13140, max=52251, avg=29780.27, stdev=1520.07 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.514 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.514 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.514 | 99.00th=[30540], 99.50th=[30802], 99.90th=[48497], 99.95th=[48497], 00:34:20.514 | 99.99th=[52167] 00:34:20.514 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2121.84, stdev=64.71, samples=19 00:34:20.514 iops : min= 512, max= 544, avg=530.42, stdev=16.15, samples=19 00:34:20.514 lat (msec) : 20=0.60%, 50=99.36%, 100=0.04% 00:34:20.514 cpu : usr=98.42%, sys=1.20%, ctx=12, majf=0, minf=9 00:34:20.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916500: Wed Nov 20 10:12:52 2024 
00:34:20.514 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:20.514 slat (nsec): min=5888, max=68553, avg=33833.21, stdev=11161.45 00:34:20.514 clat (usec): min=13157, max=51723, avg=29727.55, stdev=1502.43 00:34:20.514 lat (usec): min=13194, max=51739, avg=29761.38, stdev=1502.42 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.514 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.514 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.514 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47973], 99.95th=[47973], 00:34:20.514 | 99.99th=[51643] 00:34:20.514 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.05, stdev=64.46, samples=19 00:34:20.514 iops : min= 512, max= 544, avg=530.47, stdev=16.08, samples=19 00:34:20.514 lat (msec) : 20=0.60%, 50=99.36%, 100=0.04% 00:34:20.514 cpu : usr=98.66%, sys=0.97%, ctx=12, majf=0, minf=9 00:34:20.514 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916501: Wed Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=547, BW=2189KiB/s (2242kB/s)(21.4MiB/10008msec) 00:34:20.514 slat (nsec): min=6516, max=96571, avg=19273.66, stdev=17044.66 00:34:20.514 clat (usec): min=10687, max=80050, avg=29155.98, stdev=4762.63 00:34:20.514 lat (usec): min=10697, max=80071, avg=29175.25, stdev=4760.84 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[16909], 5.00th=[20579], 10.00th=[22152], 20.00th=[26346], 00:34:20.514 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 
00:34:20.514 | 70.00th=[30016], 80.00th=[30278], 90.00th=[33162], 95.00th=[38011], 00:34:20.514 | 99.00th=[39584], 99.50th=[41157], 99.90th=[68682], 99.95th=[68682], 00:34:20.514 | 99.99th=[80217] 00:34:20.514 bw ( KiB/s): min= 1891, max= 2416, per=4.27%, avg=2186.00, stdev=104.42, samples=19 00:34:20.514 iops : min= 472, max= 604, avg=546.42, stdev=26.22, samples=19 00:34:20.514 lat (msec) : 20=2.92%, 50=96.79%, 100=0.29% 00:34:20.514 cpu : usr=98.47%, sys=1.15%, ctx=15, majf=0, minf=9 00:34:20.514 IO depths : 1=0.1%, 2=0.2%, 4=3.0%, 8=80.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:34:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 complete : 0=0.0%, 4=89.0%, 8=9.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.514 issued rwts: total=5478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.514 filename1: (groupid=0, jobs=1): err= 0: pid=2916502: Wed Nov 20 10:12:52 2024 00:34:20.514 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:34:20.514 slat (nsec): min=4003, max=68355, avg=34754.50, stdev=10539.96 00:34:20.514 clat (usec): min=12883, max=54356, avg=29757.71, stdev=1611.40 00:34:20.514 lat (usec): min=12925, max=54368, avg=29792.46, stdev=1610.55 00:34:20.514 clat percentiles (usec): 00:34:20.514 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.514 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.514 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.514 | 99.00th=[30540], 99.50th=[30802], 99.90th=[50594], 99.95th=[50594], 00:34:20.514 | 99.99th=[54264] 00:34:20.514 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2121.84, stdev=77.51, samples=19 00:34:20.515 iops : min= 480, max= 544, avg=530.42, stdev=19.35, samples=19 00:34:20.515 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:34:20.515 cpu : usr=98.57%, sys=1.05%, ctx=17, majf=0, minf=9 00:34:20.515 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename1: (groupid=0, jobs=1): err= 0: pid=2916503: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=532, BW=2128KiB/s (2180kB/s)(20.8MiB/10013msec) 00:34:20.515 slat (nsec): min=5643, max=70108, avg=27731.91, stdev=12034.80 00:34:20.515 clat (usec): min=19086, max=40825, avg=29863.99, stdev=839.64 00:34:20.515 lat (usec): min=19124, max=40840, avg=29891.72, stdev=837.57 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.515 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.515 | 99.00th=[30802], 99.50th=[30802], 99.90th=[39060], 99.95th=[39060], 00:34:20.515 | 99.99th=[40633] 00:34:20.515 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2128.58, stdev=63.24, samples=19 00:34:20.515 iops : min= 512, max= 544, avg=532.11, stdev=15.78, samples=19 00:34:20.515 lat (msec) : 20=0.30%, 50=99.70% 00:34:20.515 cpu : usr=98.52%, sys=1.11%, ctx=13, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916504: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=533, BW=2135KiB/s 
(2186kB/s)(20.9MiB/10013msec) 00:34:20.515 slat (nsec): min=7796, max=64311, avg=21598.27, stdev=9945.42 00:34:20.515 clat (usec): min=9472, max=47447, avg=29793.74, stdev=1323.02 00:34:20.515 lat (usec): min=9480, max=47464, avg=29815.34, stdev=1323.10 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[26608], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.515 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:34:20.515 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:34:20.515 | 99.99th=[47449] 00:34:20.515 bw ( KiB/s): min= 2043, max= 2176, per=4.17%, avg=2135.32, stdev=61.54, samples=19 00:34:20.515 iops : min= 510, max= 544, avg=533.79, stdev=15.45, samples=19 00:34:20.515 lat (msec) : 10=0.04%, 20=0.60%, 50=99.36% 00:34:20.515 cpu : usr=98.32%, sys=1.09%, ctx=94, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916505: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:20.515 slat (nsec): min=7335, max=67218, avg=24785.04, stdev=11933.74 00:34:20.515 clat (usec): min=18189, max=36202, avg=29898.65, stdev=730.65 00:34:20.515 lat (usec): min=18197, max=36219, avg=29923.44, stdev=729.12 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:34:20.515 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 
90.00th=[30278], 95.00th=[30540], 00:34:20.515 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:34:20.515 | 99.99th=[36439] 00:34:20.515 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2128.32, stdev=58.64, samples=19 00:34:20.515 iops : min= 512, max= 544, avg=532.00, stdev=14.68, samples=19 00:34:20.515 lat (msec) : 20=0.30%, 50=99.70% 00:34:20.515 cpu : usr=98.79%, sys=0.84%, ctx=8, majf=0, minf=9 00:34:20.515 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916506: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10020msec) 00:34:20.515 slat (nsec): min=6796, max=54532, avg=17364.97, stdev=7798.55 00:34:20.515 clat (usec): min=10768, max=30984, avg=29755.74, stdev=1586.69 00:34:20.515 lat (usec): min=10776, max=31011, avg=29773.11, stdev=1587.49 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[23987], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.515 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:34:20.515 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:34:20.515 | 99.99th=[31065] 00:34:20.515 bw ( KiB/s): min= 2043, max= 2304, per=4.18%, avg=2137.10, stdev=73.32, samples=20 00:34:20.515 iops : min= 510, max= 576, avg=534.20, stdev=18.36, samples=20 00:34:20.515 lat (msec) : 20=0.86%, 50=99.14% 00:34:20.515 cpu : usr=98.23%, sys=1.26%, ctx=33, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916507: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:20.515 slat (nsec): min=5325, max=70065, avg=35186.68, stdev=10522.18 00:34:20.515 clat (usec): min=13015, max=47663, avg=29732.75, stdev=1480.16 00:34:20.515 lat (usec): min=13032, max=47678, avg=29767.94, stdev=1479.71 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.515 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.515 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47449], 99.95th=[47449], 00:34:20.515 | 99.99th=[47449] 00:34:20.515 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.05, stdev=64.46, samples=19 00:34:20.515 iops : min= 512, max= 544, avg=530.47, stdev=16.08, samples=19 00:34:20.515 lat (msec) : 20=0.60%, 50=99.40% 00:34:20.515 cpu : usr=98.51%, sys=1.12%, ctx=13, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916508: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:20.515 slat (nsec): min=6779, 
max=85225, avg=31109.18, stdev=18282.94 00:34:20.515 clat (usec): min=9483, max=45028, avg=29655.99, stdev=1298.04 00:34:20.515 lat (usec): min=9496, max=45043, avg=29687.10, stdev=1299.87 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[26870], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:20.515 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.515 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.515 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:34:20.515 | 99.99th=[44827] 00:34:20.515 bw ( KiB/s): min= 2043, max= 2176, per=4.17%, avg=2135.32, stdev=61.54, samples=19 00:34:20.515 iops : min= 510, max= 544, avg=533.79, stdev=15.45, samples=19 00:34:20.515 lat (msec) : 10=0.04%, 20=0.60%, 50=99.36% 00:34:20.515 cpu : usr=98.54%, sys=1.08%, ctx=13, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916509: Wed Nov 20 10:12:52 2024 00:34:20.515 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10005msec) 00:34:20.515 slat (nsec): min=7283, max=79609, avg=20957.52, stdev=13754.83 00:34:20.515 clat (usec): min=9045, max=30984, avg=29666.21, stdev=1805.05 00:34:20.515 lat (usec): min=9062, max=31020, avg=29687.17, stdev=1804.90 00:34:20.515 clat percentiles (usec): 00:34:20.515 | 1.00th=[17957], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:34:20.515 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:20.515 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.515 | 99.00th=[30540], 
99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:20.515 | 99.99th=[31065] 00:34:20.515 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2141.79, stdev=71.69, samples=19 00:34:20.515 iops : min= 512, max= 576, avg=535.37, stdev=17.89, samples=19 00:34:20.515 lat (msec) : 10=0.26%, 20=0.93%, 50=98.81% 00:34:20.515 cpu : usr=98.74%, sys=0.88%, ctx=23, majf=0, minf=9 00:34:20.515 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:20.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.515 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.515 filename2: (groupid=0, jobs=1): err= 0: pid=2916510: Wed Nov 20 10:12:52 2024 00:34:20.516 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10005msec) 00:34:20.516 slat (nsec): min=7563, max=85902, avg=30117.74, stdev=18258.94 00:34:20.516 clat (usec): min=17676, max=32382, avg=29728.88, stdev=786.72 00:34:20.516 lat (usec): min=17696, max=32425, avg=29759.00, stdev=788.50 00:34:20.516 clat percentiles (usec): 00:34:20.516 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.516 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.516 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:20.516 | 99.00th=[31065], 99.50th=[31851], 99.90th=[32375], 99.95th=[32375], 00:34:20.516 | 99.99th=[32375] 00:34:20.516 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2128.84, stdev=63.44, samples=19 00:34:20.516 iops : min= 512, max= 544, avg=532.21, stdev=15.86, samples=19 00:34:20.516 lat (msec) : 20=0.30%, 50=99.70% 00:34:20.516 cpu : usr=98.47%, sys=1.15%, ctx=15, majf=0, minf=9 00:34:20.516 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.516 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.516 filename2: (groupid=0, jobs=1): err= 0: pid=2916511: Wed Nov 20 10:12:52 2024 00:34:20.516 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:34:20.516 slat (usec): min=4, max=104, avg=34.70, stdev=12.93 00:34:20.516 clat (usec): min=13036, max=51110, avg=29730.24, stdev=1607.37 00:34:20.516 lat (usec): min=13056, max=51122, avg=29764.94, stdev=1607.06 00:34:20.516 clat percentiles (usec): 00:34:20.516 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:20.516 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:20.516 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:20.516 | 99.00th=[30540], 99.50th=[30802], 99.90th=[51119], 99.95th=[51119], 00:34:20.516 | 99.99th=[51119] 00:34:20.516 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2121.84, stdev=77.51, samples=19 00:34:20.516 iops : min= 480, max= 544, avg=530.42, stdev=19.35, samples=19 00:34:20.516 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:34:20.516 cpu : usr=98.70%, sys=0.93%, ctx=13, majf=0, minf=9 00:34:20.516 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:20.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.516 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.516 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:20.516 00:34:20.516 Run status group 0 (all jobs): 00:34:20.516 READ: bw=50.0MiB/s (52.4MB/s), 2124KiB/s-2189KiB/s (2174kB/s-2242kB/s), io=501MiB (525MB), run=10005-10020msec 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # 
destroy_subsystems 0 1 2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:20.516 10:12:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 bdev_null0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 [2024-11-20 10:12:52.706580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 bdev_null1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:20.516 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:20.517 { 00:34:20.517 "params": { 00:34:20.517 "name": "Nvme$subsystem", 00:34:20.517 "trtype": "$TEST_TRANSPORT", 00:34:20.517 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:20.517 "adrfam": "ipv4", 00:34:20.517 "trsvcid": "$NVMF_PORT", 00:34:20.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.517 "hdgst": ${hdgst:-false}, 00:34:20.517 "ddgst": ${ddgst:-false} 00:34:20.517 }, 00:34:20.517 "method": "bdev_nvme_attach_controller" 00:34:20.517 } 00:34:20.517 EOF 00:34:20.517 )") 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.517 10:12:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:20.517 { 00:34:20.517 "params": { 00:34:20.517 "name": "Nvme$subsystem", 00:34:20.517 "trtype": "$TEST_TRANSPORT", 00:34:20.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.517 "adrfam": "ipv4", 00:34:20.517 "trsvcid": "$NVMF_PORT", 00:34:20.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.517 "hdgst": ${hdgst:-false}, 00:34:20.517 "ddgst": ${ddgst:-false} 00:34:20.517 }, 00:34:20.517 "method": "bdev_nvme_attach_controller" 00:34:20.517 } 00:34:20.517 EOF 00:34:20.517 )") 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:20.517 "params": { 00:34:20.517 "name": "Nvme0", 00:34:20.517 "trtype": "tcp", 00:34:20.517 "traddr": "10.0.0.2", 00:34:20.517 "adrfam": "ipv4", 00:34:20.517 "trsvcid": "4420", 00:34:20.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:20.517 "hdgst": false, 00:34:20.517 "ddgst": false 00:34:20.517 }, 00:34:20.517 "method": "bdev_nvme_attach_controller" 00:34:20.517 },{ 00:34:20.517 "params": { 00:34:20.517 "name": "Nvme1", 00:34:20.517 "trtype": "tcp", 00:34:20.517 "traddr": "10.0.0.2", 00:34:20.517 "adrfam": "ipv4", 00:34:20.517 "trsvcid": "4420", 00:34:20.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:20.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:20.517 "hdgst": false, 00:34:20.517 "ddgst": false 00:34:20.517 }, 00:34:20.517 "method": "bdev_nvme_attach_controller" 00:34:20.517 }' 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:20.517 10:12:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:20.517 10:12:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.517 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:20.517 ... 00:34:20.517 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:20.517 ... 00:34:20.517 fio-3.35 00:34:20.517 Starting 4 threads 00:34:25.787 00:34:25.787 filename0: (groupid=0, jobs=1): err= 0: pid=2918455: Wed Nov 20 10:12:58 2024 00:34:25.787 read: IOPS=2839, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:34:25.787 slat (nsec): min=5768, max=69148, avg=10648.40, stdev=6923.00 00:34:25.787 clat (usec): min=1043, max=5033, avg=2785.23, stdev=408.95 00:34:25.787 lat (usec): min=1050, max=5041, avg=2795.87, stdev=409.23 00:34:25.787 clat percentiles (usec): 00:34:25.787 | 1.00th=[ 1663], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2442], 00:34:25.787 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2933], 00:34:25.787 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3392], 00:34:25.787 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4555], 99.95th=[ 4752], 00:34:25.787 | 99.99th=[ 5014] 00:34:25.787 bw ( KiB/s): min=21760, max=24432, per=26.64%, avg=22848.00, stdev=847.55, samples=9 00:34:25.787 iops : min= 2720, max= 3054, avg=2856.00, stdev=105.94, samples=9 00:34:25.787 lat (msec) : 2=3.13%, 4=96.16%, 10=0.71% 00:34:25.787 cpu : usr=96.28%, sys=3.38%, ctx=18, majf=0, minf=9 00:34:25.787 IO depths : 1=0.7%, 2=5.7%, 4=64.9%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 issued rwts: total=14204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.787 filename0: (groupid=0, jobs=1): err= 0: pid=2918456: Wed Nov 20 10:12:58 2024 00:34:25.787 read: IOPS=2593, BW=20.3MiB/s (21.2MB/s)(101MiB/5001msec) 00:34:25.787 slat (nsec): min=5821, max=69190, avg=10742.17, stdev=7213.23 00:34:25.787 clat (usec): min=826, max=5789, avg=3051.97, stdev=416.97 00:34:25.787 lat (usec): min=835, max=5800, avg=3062.71, stdev=416.76 00:34:25.787 clat percentiles (usec): 00:34:25.787 | 1.00th=[ 2040], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2868], 00:34:25.787 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:34:25.787 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3785], 00:34:25.787 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5342], 99.95th=[ 5538], 00:34:25.787 | 99.99th=[ 5800] 00:34:25.787 bw ( KiB/s): min=19735, max=21200, per=24.20%, avg=20759.89, stdev=493.22, samples=9 00:34:25.787 iops : min= 2466, max= 2650, avg=2594.89, stdev=61.88, samples=9 00:34:25.787 lat (usec) : 1000=0.02% 00:34:25.787 lat (msec) : 2=0.88%, 4=96.19%, 10=2.91% 00:34:25.787 cpu : usr=96.96%, sys=2.70%, ctx=7, majf=0, minf=9 00:34:25.787 IO depths : 1=0.2%, 2=2.6%, 4=70.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 issued rwts: total=12971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.787 filename1: (groupid=0, jobs=1): err= 0: pid=2918457: Wed Nov 20 10:12:58 2024 00:34:25.787 read: IOPS=2692, BW=21.0MiB/s (22.1MB/s)(105MiB/5002msec) 00:34:25.787 slat (nsec): min=5738, max=62447, avg=10801.04, stdev=7107.03 00:34:25.787 clat (usec): min=796, max=5416, avg=2939.14, stdev=408.20 
00:34:25.787 lat (usec): min=803, max=5428, avg=2949.94, stdev=408.13 00:34:25.787 clat percentiles (usec): 00:34:25.787 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2671], 00:34:25.787 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:34:25.787 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3654], 00:34:25.787 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5145], 00:34:25.787 | 99.99th=[ 5407] 00:34:25.787 bw ( KiB/s): min=20544, max=22464, per=25.10%, avg=21528.22, stdev=625.47, samples=9 00:34:25.787 iops : min= 2568, max= 2808, avg=2691.00, stdev=78.20, samples=9 00:34:25.787 lat (usec) : 1000=0.03% 00:34:25.787 lat (msec) : 2=1.30%, 4=97.07%, 10=1.60% 00:34:25.787 cpu : usr=96.62%, sys=3.04%, ctx=7, majf=0, minf=9 00:34:25.787 IO depths : 1=0.3%, 2=3.9%, 4=67.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 issued rwts: total=13468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.787 filename1: (groupid=0, jobs=1): err= 0: pid=2918458: Wed Nov 20 10:12:58 2024 00:34:25.787 read: IOPS=2596, BW=20.3MiB/s (21.3MB/s)(101MiB/5001msec) 00:34:25.787 slat (nsec): min=5953, max=67988, avg=13274.34, stdev=8798.15 00:34:25.787 clat (usec): min=775, max=5365, avg=3043.53, stdev=408.12 00:34:25.787 lat (usec): min=786, max=5373, avg=3056.80, stdev=408.32 00:34:25.787 clat percentiles (usec): 00:34:25.787 | 1.00th=[ 2073], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2835], 00:34:25.787 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:34:25.787 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3785], 00:34:25.787 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5014], 99.95th=[ 5211], 00:34:25.787 | 99.99th=[ 5342] 00:34:25.787 bw ( 
KiB/s): min=20304, max=21456, per=24.19%, avg=20746.67, stdev=419.37, samples=9 00:34:25.787 iops : min= 2538, max= 2682, avg=2593.33, stdev=52.42, samples=9 00:34:25.787 lat (usec) : 1000=0.05% 00:34:25.787 lat (msec) : 2=0.71%, 4=96.21%, 10=3.03% 00:34:25.787 cpu : usr=97.04%, sys=2.64%, ctx=9, majf=0, minf=9 00:34:25.787 IO depths : 1=0.1%, 2=2.7%, 4=69.3%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.787 issued rwts: total=12983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:25.787 00:34:25.787 Run status group 0 (all jobs): 00:34:25.787 READ: bw=83.8MiB/s (87.8MB/s), 20.3MiB/s-22.2MiB/s (21.2MB/s-23.3MB/s), io=419MiB (439MB), run=5001-5002msec 00:34:25.787 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:25.787 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 00:34:25.788 real 0m24.435s 00:34:25.788 user 4m52.638s 00:34:25.788 sys 0m4.981s 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 ************************************ 00:34:25.788 END TEST fio_dif_rand_params 00:34:25.788 ************************************ 00:34:25.788 10:12:59 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:25.788 10:12:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.788 10:12:59 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 ************************************ 00:34:25.788 START TEST fio_dif_digest 00:34:25.788 ************************************ 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:34:25.788 bdev_null0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:25.788 [2024-11-20 10:12:59.303548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.788 { 00:34:25.788 "params": { 00:34:25.788 "name": "Nvme$subsystem", 00:34:25.788 "trtype": "$TEST_TRANSPORT", 00:34:25.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.788 "adrfam": "ipv4", 00:34:25.788 "trsvcid": "$NVMF_PORT", 00:34:25.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.788 "hdgst": ${hdgst:-false}, 00:34:25.788 "ddgst": ${ddgst:-false} 00:34:25.788 }, 00:34:25.788 "method": "bdev_nvme_attach_controller" 00:34:25.788 } 00:34:25.788 EOF 00:34:25.788 )") 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 
-- # shift 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:25.788 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.789 "params": { 00:34:25.789 "name": "Nvme0", 00:34:25.789 "trtype": "tcp", 00:34:25.789 "traddr": "10.0.0.2", 00:34:25.789 "adrfam": "ipv4", 00:34:25.789 "trsvcid": "4420", 00:34:25.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:25.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:25.789 "hdgst": true, 00:34:25.789 "ddgst": true 00:34:25.789 }, 00:34:25.789 "method": "bdev_nvme_attach_controller" 00:34:25.789 }' 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:25.789 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:26.068 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:26.068 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:26.068 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:26.068 10:12:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.333 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:26.333 ... 00:34:26.333 fio-3.35 00:34:26.333 Starting 3 threads 00:34:38.537 00:34:38.537 filename0: (groupid=0, jobs=1): err= 0: pid=2919516: Wed Nov 20 10:13:10 2024 00:34:38.537 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(379MiB/10046msec) 00:34:38.537 slat (nsec): min=6213, max=28389, avg=11072.06, stdev=1725.57 00:34:38.537 clat (usec): min=7387, max=50837, avg=9903.16, stdev=1236.47 00:34:38.537 lat (usec): min=7398, max=50848, avg=9914.23, stdev=1236.42 00:34:38.537 clat percentiles (usec): 00:34:38.537 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:34:38.537 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:34:38.537 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:38.537 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12256], 99.95th=[49021], 00:34:38.537 | 99.99th=[50594] 00:34:38.537 bw ( KiB/s): min=36608, max=39424, per=35.50%, avg=38822.40, stdev=639.46, samples=20 00:34:38.537 iops : min= 286, max= 308, avg=303.30, stdev= 5.00, samples=20 00:34:38.537 lat (msec) : 10=57.07%, 20=42.87%, 50=0.03%, 100=0.03% 00:34:38.537 cpu : 
usr=94.59%, sys=5.11%, ctx=18, majf=0, minf=80 00:34:38.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 issued rwts: total=3035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.537 filename0: (groupid=0, jobs=1): err= 0: pid=2919517: Wed Nov 20 10:13:10 2024 00:34:38.537 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10044msec) 00:34:38.537 slat (nsec): min=6273, max=25863, avg=10908.31, stdev=1717.31 00:34:38.537 clat (usec): min=7547, max=47135, avg=10706.86, stdev=1185.14 00:34:38.537 lat (usec): min=7560, max=47145, avg=10717.77, stdev=1185.14 00:34:38.537 clat percentiles (usec): 00:34:38.537 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:34:38.537 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:34:38.537 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:34:38.537 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13698], 99.95th=[43779], 00:34:38.537 | 99.99th=[46924] 00:34:38.537 bw ( KiB/s): min=34816, max=36608, per=32.83%, avg=35904.00, stdev=461.51, samples=20 00:34:38.537 iops : min= 272, max= 286, avg=280.50, stdev= 3.61, samples=20 00:34:38.537 lat (msec) : 10=16.64%, 20=83.29%, 50=0.07% 00:34:38.537 cpu : usr=94.67%, sys=5.04%, ctx=17, majf=0, minf=48 00:34:38.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.537 filename0: (groupid=0, jobs=1): err= 0: 
pid=2919518: Wed Nov 20 10:13:10 2024 00:34:38.537 read: IOPS=272, BW=34.1MiB/s (35.8MB/s)(343MiB/10043msec) 00:34:38.537 slat (nsec): min=6261, max=27335, avg=10991.61, stdev=1491.80 00:34:38.537 clat (usec): min=8378, max=51136, avg=10967.99, stdev=1241.46 00:34:38.537 lat (usec): min=8390, max=51146, avg=10978.98, stdev=1241.47 00:34:38.537 clat percentiles (usec): 00:34:38.537 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:38.537 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:34:38.537 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:38.537 | 99.00th=[12780], 99.50th=[12911], 99.90th=[14222], 99.95th=[46400], 00:34:38.537 | 99.99th=[51119] 00:34:38.537 bw ( KiB/s): min=34560, max=35840, per=32.05%, avg=35046.40, stdev=388.69, samples=20 00:34:38.537 iops : min= 270, max= 280, avg=273.80, stdev= 3.04, samples=20 00:34:38.537 lat (msec) : 10=8.28%, 20=91.64%, 50=0.04%, 100=0.04% 00:34:38.537 cpu : usr=94.35%, sys=5.35%, ctx=17, majf=0, minf=21 00:34:38.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:38.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:38.537 issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:38.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:38.537 00:34:38.537 Run status group 0 (all jobs): 00:34:38.537 READ: bw=107MiB/s (112MB/s), 34.1MiB/s-37.8MiB/s (35.8MB/s-39.6MB/s), io=1073MiB (1125MB), run=10043-10046msec 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:38.537 10:13:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.537 00:34:38.537 real 0m11.087s 00:34:38.537 user 0m35.193s 00:34:38.537 sys 0m1.856s 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:38.537 10:13:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:38.537 ************************************ 00:34:38.537 END TEST fio_dif_digest 00:34:38.537 ************************************ 00:34:38.537 10:13:10 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:38.537 10:13:10 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.537 rmmod nvme_tcp 00:34:38.537 rmmod nvme_fabrics 00:34:38.537 rmmod nvme_keyring 00:34:38.537 10:13:10 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2910912 ']' 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2910912 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2910912 ']' 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2910912 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910912 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910912' 00:34:38.537 killing process with pid 2910912 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2910912 00:34:38.537 10:13:10 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2910912 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:38.537 10:13:10 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:39.914 Waiting for block devices as requested 00:34:39.914 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:40.172 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:40.172 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.172 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.431 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.431 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.431 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:40.431 0000:00:04.1 (8086 
2021): vfio-pci -> ioatdma 00:34:40.690 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:40.690 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:40.690 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:40.949 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:40.949 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:40.949 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:40.949 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:41.209 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:41.209 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.209 10:13:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.209 10:13:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.209 10:13:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.743 10:13:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.743 00:34:43.743 real 1m14.769s 00:34:43.743 user 7m11.642s 00:34:43.743 sys 0m20.373s 00:34:43.743 10:13:16 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.743 10:13:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:43.743 ************************************ 00:34:43.743 END TEST nvmf_dif 00:34:43.743 ************************************ 00:34:43.743 10:13:16 -- spdk/autotest.sh@290 -- # run_test 
nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:43.743 10:13:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:43.743 10:13:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.743 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.743 ************************************ 00:34:43.743 START TEST nvmf_abort_qd_sizes 00:34:43.743 ************************************ 00:34:43.743 10:13:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:43.743 * Looking for test storage... 00:34:43.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- 
scripts/common.sh@341 -- # ver2_l=1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.743 --rc genhtml_branch_coverage=1 00:34:43.743 --rc 
genhtml_function_coverage=1 00:34:43.743 --rc genhtml_legend=1 00:34:43.743 --rc geninfo_all_blocks=1 00:34:43.743 --rc geninfo_unexecuted_blocks=1 00:34:43.743 00:34:43.743 ' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.743 --rc genhtml_branch_coverage=1 00:34:43.743 --rc genhtml_function_coverage=1 00:34:43.743 --rc genhtml_legend=1 00:34:43.743 --rc geninfo_all_blocks=1 00:34:43.743 --rc geninfo_unexecuted_blocks=1 00:34:43.743 00:34:43.743 ' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.743 --rc genhtml_branch_coverage=1 00:34:43.743 --rc genhtml_function_coverage=1 00:34:43.743 --rc genhtml_legend=1 00:34:43.743 --rc geninfo_all_blocks=1 00:34:43.743 --rc geninfo_unexecuted_blocks=1 00:34:43.743 00:34:43.743 ' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.743 --rc genhtml_branch_coverage=1 00:34:43.743 --rc genhtml_function_coverage=1 00:34:43.743 --rc genhtml_legend=1 00:34:43.743 --rc geninfo_all_blocks=1 00:34:43.743 --rc geninfo_unexecuted_blocks=1 00:34:43.743 00:34:43.743 ' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.743 10:13:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.743 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:34:43.744 10:13:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:50.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:50.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:50.317 Found net devices under 0000:86:00.0: cvl_0_0 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.317 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:50.318 Found net devices under 0000:86:00.1: cvl_0_1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.318 10:13:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:50.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:34:50.318 00:34:50.318 --- 10.0.0.2 ping statistics --- 00:34:50.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.318 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:34:50.318 00:34:50.318 --- 10.0.0.1 ping statistics --- 00:34:50.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.318 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:50.318 10:13:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:52.224 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.224 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.224 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:52.483 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:52.483 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:53.864 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2927542 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2927542 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2927542 ']' 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:53.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.864 10:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:54.123 [2024-11-20 10:13:27.470900] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:34:54.123 [2024-11-20 10:13:27.470942] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.123 [2024-11-20 10:13:27.548595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:54.123 [2024-11-20 10:13:27.591936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.123 [2024-11-20 10:13:27.591976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.123 [2024-11-20 10:13:27.591983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.123 [2024-11-20 10:13:27.591988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.123 [2024-11-20 10:13:27.591993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:54.123 [2024-11-20 10:13:27.593583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.123 [2024-11-20 10:13:27.593690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:54.123 [2024-11-20 10:13:27.593712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:54.123 [2024-11-20 10:13:27.593713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.056 10:13:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.056 ************************************ 00:34:55.056 START TEST spdk_target_abort 00:34:55.056 ************************************ 00:34:55.056 10:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:55.056 10:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:55.056 10:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:55.056 10:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.056 10:13:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.339 spdk_targetn1 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.339 [2024-11-20 10:13:31.221404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.339 [2024-11-20 10:13:31.269420] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:58.339 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:58.340 10:13:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:01.622 Initializing NVMe Controllers 00:35:01.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:01.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:01.623 Initialization complete. Launching workers. 
00:35:01.623 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15921, failed: 0 00:35:01.623 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 14678 00:35:01.623 success 727, unsuccessful 516, failed 0 00:35:01.623 10:13:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:01.623 10:13:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:04.905 Initializing NVMe Controllers 00:35:04.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:04.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:04.905 Initialization complete. Launching workers. 00:35:04.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8405, failed: 0 00:35:04.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7187 00:35:04.905 success 337, unsuccessful 881, failed 0 00:35:04.905 10:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:04.905 10:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.186 Initializing NVMe Controllers 00:35:08.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.186 Initialization complete. Launching workers. 
00:35:08.186 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38332, failed: 0 00:35:08.186 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2877, failed to submit 35455 00:35:08.186 success 634, unsuccessful 2243, failed 0 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:08.186 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.187 10:13:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2927542 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2927542 ']' 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2927542 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927542 00:35:09.559 10:13:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927542' 00:35:09.559 killing process with pid 2927542 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2927542 00:35:09.559 10:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2927542 00:35:09.559 00:35:09.559 real 0m14.728s 00:35:09.559 user 0m58.657s 00:35:09.559 sys 0m2.626s 00:35:09.559 10:13:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.559 10:13:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.559 ************************************ 00:35:09.559 END TEST spdk_target_abort 00:35:09.559 ************************************ 00:35:09.818 10:13:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:09.818 10:13:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:09.818 10:13:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.818 10:13:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:09.818 ************************************ 00:35:09.818 START TEST kernel_target_abort 00:35:09.818 ************************************ 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:09.818 10:13:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:09.818 10:13:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:12.353 Waiting for block devices as requested 00:35:12.611 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:12.611 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:12.611 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:12.869 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:12.869 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:12.869 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.647 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.647 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:13.647 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:13.906 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:13.906 10:13:47 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:13.906 No valid GPT data, bailing 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:13.906 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:14.163 00:35:14.163 Discovery Log Number of Records 2, Generation counter 2 00:35:14.163 =====Discovery Log Entry 0====== 00:35:14.163 trtype: tcp 00:35:14.163 adrfam: ipv4 00:35:14.163 subtype: current discovery subsystem 00:35:14.163 treq: not specified, sq flow control disable supported 00:35:14.163 portid: 1 00:35:14.163 trsvcid: 4420 00:35:14.163 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:14.163 traddr: 10.0.0.1 00:35:14.163 eflags: none 00:35:14.163 sectype: none 00:35:14.163 =====Discovery Log Entry 1====== 00:35:14.163 trtype: tcp 00:35:14.163 adrfam: ipv4 00:35:14.163 subtype: nvme subsystem 00:35:14.163 treq: not specified, sq flow control disable supported 00:35:14.163 portid: 1 00:35:14.163 trsvcid: 4420 00:35:14.163 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:14.163 traddr: 10.0.0.1 00:35:14.163 eflags: none 00:35:14.163 sectype: none 00:35:14.163 10:13:47 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:14.163 10:13:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:17.581 Initializing NVMe Controllers 00:35:17.581 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:17.581 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:17.581 Initialization complete. Launching workers. 
00:35:17.581 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95187, failed: 0 00:35:17.581 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95187, failed to submit 0 00:35:17.581 success 0, unsuccessful 95187, failed 0 00:35:17.581 10:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:17.581 10:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.877 Initializing NVMe Controllers 00:35:20.877 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:20.877 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:20.877 Initialization complete. Launching workers. 00:35:20.877 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151400, failed: 0 00:35:20.877 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38218, failed to submit 113182 00:35:20.877 success 0, unsuccessful 38218, failed 0 00:35:20.877 10:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.877 10:13:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.411 Initializing NVMe Controllers 00:35:23.411 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:23.411 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:23.411 Initialization complete. Launching workers. 
00:35:23.411 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142663, failed: 0 00:35:23.411 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35718, failed to submit 106945 00:35:23.411 success 0, unsuccessful 35718, failed 0 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:23.411 10:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:26.699 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.699 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:28.080 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:28.080 00:35:28.080 real 0m18.169s 00:35:28.080 user 0m9.091s 00:35:28.080 sys 0m5.164s 00:35:28.080 10:14:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.080 10:14:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:28.080 ************************************ 00:35:28.080 END TEST kernel_target_abort 00:35:28.080 ************************************ 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.080 rmmod nvme_tcp 00:35:28.080 rmmod nvme_fabrics 00:35:28.080 rmmod nvme_keyring 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2927542 ']' 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2927542 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2927542 ']' 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2927542 00:35:28.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2927542) - No such process 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2927542 is not found' 00:35:28.080 Process with pid 2927542 is not found 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:28.080 10:14:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:30.616 Waiting for block devices as requested 00:35:30.616 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:30.876 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:30.876 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.135 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.135 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.135 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.395 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.395 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.395 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:31.395 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.655 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.655 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.655 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.914 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.914 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.914 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.914 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.173 10:14:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.079 10:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.079 00:35:34.079 real 0m50.721s 00:35:34.079 user 1m12.332s 00:35:34.079 sys 0m16.526s 00:35:34.079 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.079 10:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.079 ************************************ 00:35:34.079 END TEST nvmf_abort_qd_sizes 00:35:34.079 ************************************ 00:35:34.337 10:14:07 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:34.337 10:14:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.337 10:14:07 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:34.337 10:14:07 -- common/autotest_common.sh@10 -- # set +x 00:35:34.337 ************************************ 00:35:34.337 START TEST keyring_file 00:35:34.337 ************************************ 00:35:34.337 10:14:07 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:34.337 * Looking for test storage... 00:35:34.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:34.337 10:14:07 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.337 10:14:07 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.337 10:14:07 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.337 10:14:07 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.337 10:14:07 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:34.337 10:14:07 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:34.338 10:14:07 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.338 10:14:07 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.338 --rc genhtml_branch_coverage=1 00:35:34.338 --rc genhtml_function_coverage=1 00:35:34.338 --rc genhtml_legend=1 00:35:34.338 --rc geninfo_all_blocks=1 00:35:34.338 --rc geninfo_unexecuted_blocks=1 00:35:34.338 00:35:34.338 ' 00:35:34.338 10:14:07 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.338 --rc genhtml_branch_coverage=1 00:35:34.338 --rc genhtml_function_coverage=1 00:35:34.338 --rc genhtml_legend=1 00:35:34.338 --rc geninfo_all_blocks=1 00:35:34.338 --rc 
geninfo_unexecuted_blocks=1 00:35:34.338 00:35:34.338 ' 00:35:34.338 10:14:07 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.338 --rc genhtml_branch_coverage=1 00:35:34.338 --rc genhtml_function_coverage=1 00:35:34.338 --rc genhtml_legend=1 00:35:34.338 --rc geninfo_all_blocks=1 00:35:34.338 --rc geninfo_unexecuted_blocks=1 00:35:34.338 00:35:34.338 ' 00:35:34.338 10:14:07 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.338 --rc genhtml_branch_coverage=1 00:35:34.338 --rc genhtml_function_coverage=1 00:35:34.338 --rc genhtml_legend=1 00:35:34.338 --rc geninfo_all_blocks=1 00:35:34.338 --rc geninfo_unexecuted_blocks=1 00:35:34.338 00:35:34.338 ' 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.338 10:14:07 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.338 10:14:07 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.338 10:14:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.338 10:14:07 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.338 10:14:07 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.338 10:14:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:34.338 10:14:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:34.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:34.338 10:14:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3pbTbznw3u 00:35:34.338 10:14:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.338 10:14:07 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3pbTbznw3u 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3pbTbznw3u 00:35:34.597 10:14:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3pbTbznw3u 00:35:34.597 10:14:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BMAqEvfKPB 00:35:34.597 10:14:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.597 10:14:07 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.597 10:14:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BMAqEvfKPB 00:35:34.597 10:14:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BMAqEvfKPB 00:35:34.597 10:14:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BMAqEvfKPB 
00:35:34.597 10:14:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=2936576 00:35:34.597 10:14:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2936576 00:35:34.597 10:14:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2936576 ']' 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.597 10:14:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:34.597 [2024-11-20 10:14:08.056526] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:35:34.597 [2024-11-20 10:14:08.056575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936576 ] 00:35:34.597 [2024-11-20 10:14:08.131495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.597 [2024-11-20 10:14:08.171267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.855 10:14:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.855 10:14:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:34.855 10:14:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:34.855 10:14:08 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.855 10:14:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:34.855 [2024-11-20 10:14:08.395456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.855 null0 00:35:34.855 [2024-11-20 10:14:08.427511] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:34.855 [2024-11-20 10:14:08.427871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.113 10:14:08 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:35.113 10:14:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.114 [2024-11-20 10:14:08.459588] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:35.114 request: 00:35:35.114 { 00:35:35.114 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.114 "secure_channel": false, 00:35:35.114 "listen_address": { 00:35:35.114 "trtype": "tcp", 00:35:35.114 "traddr": "127.0.0.1", 00:35:35.114 "trsvcid": "4420" 00:35:35.114 }, 00:35:35.114 "method": "nvmf_subsystem_add_listener", 00:35:35.114 "req_id": 1 00:35:35.114 } 00:35:35.114 Got JSON-RPC error response 00:35:35.114 response: 00:35:35.114 { 00:35:35.114 "code": -32602, 00:35:35.114 "message": "Invalid parameters" 00:35:35.114 } 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:35.114 10:14:08 keyring_file -- keyring/file.sh@47 -- # bperfpid=2936584 00:35:35.114 10:14:08 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2936584 /var/tmp/bperf.sock 00:35:35.114 10:14:08 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:35.114 10:14:08 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2936584 ']' 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.114 10:14:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.114 [2024-11-20 10:14:08.515103] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 00:35:35.114 [2024-11-20 10:14:08.515143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936584 ] 00:35:35.114 [2024-11-20 10:14:08.589344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.114 [2024-11-20 10:14:08.631743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.372 10:14:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.372 10:14:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:35.372 10:14:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:35.372 10:14:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:35.372 10:14:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BMAqEvfKPB 00:35:35.372 10:14:08 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BMAqEvfKPB 00:35:35.631 10:14:09 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:35.631 10:14:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:35.631 10:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.631 10:14:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:35.631 10:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.889 10:14:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3pbTbznw3u == \/\t\m\p\/\t\m\p\.\3\p\b\T\b\z\n\w\3\u ]] 00:35:35.889 10:14:09 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:35.889 10:14:09 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:35.889 10:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.889 10:14:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:35.889 10:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.148 10:14:09 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BMAqEvfKPB == \/\t\m\p\/\t\m\p\.\B\M\A\q\E\v\f\K\P\B ]] 00:35:36.148 10:14:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:36.148 10:14:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:36.148 10:14:09 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.148 10:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.406 10:14:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:36.406 10:14:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.406 10:14:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.665 [2024-11-20 10:14:10.065742] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:36.665 nvme0n1 00:35:36.665 10:14:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:36.665 10:14:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.665 10:14:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.665 10:14:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.665 10:14:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.665 10:14:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:36.924 10:14:10 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:36.924 10:14:10 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:36.924 10:14:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.924 10:14:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.924 10:14:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.924 10:14:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.924 10:14:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.182 10:14:10 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:37.182 10:14:10 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.182 Running I/O for 1 seconds... 00:35:38.120 19380.00 IOPS, 75.70 MiB/s 00:35:38.120 Latency(us) 00:35:38.120 [2024-11-20T09:14:11.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.120 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:38.120 nvme0n1 : 1.00 19424.00 75.88 0.00 0.00 6577.29 2777.48 12919.95 00:35:38.120 [2024-11-20T09:14:11.702Z] =================================================================================================================== 00:35:38.120 [2024-11-20T09:14:11.702Z] Total : 19424.00 75.88 0.00 0.00 6577.29 2777.48 12919.95 00:35:38.120 { 00:35:38.120 "results": [ 00:35:38.120 { 00:35:38.120 "job": "nvme0n1", 00:35:38.120 "core_mask": "0x2", 00:35:38.120 "workload": "randrw", 00:35:38.120 "percentage": 50, 00:35:38.120 "status": "finished", 00:35:38.121 "queue_depth": 128, 00:35:38.121 "io_size": 4096, 00:35:38.121 "runtime": 1.004376, 00:35:38.121 "iops": 19424.000573490404, 00:35:38.121 "mibps": 75.87500224019689, 
00:35:38.121 "io_failed": 0, 00:35:38.121 "io_timeout": 0, 00:35:38.121 "avg_latency_us": 6577.288852763926, 00:35:38.121 "min_latency_us": 2777.478095238095, 00:35:38.121 "max_latency_us": 12919.954285714286 00:35:38.121 } 00:35:38.121 ], 00:35:38.121 "core_count": 1 00:35:38.121 } 00:35:38.121 10:14:11 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:38.121 10:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:38.379 10:14:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:38.379 10:14:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.379 10:14:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.379 10:14:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.379 10:14:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.379 10:14:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.638 10:14:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:38.638 10:14:12 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:38.638 10:14:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:38.638 10:14:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.638 10:14:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.638 10:14:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.638 10:14:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:38.898 10:14:12 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:38.898 10:14:12 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:38.898 [2024-11-20 10:14:12.440531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:38.898 [2024-11-20 10:14:12.441249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026d00 (107): Transport endpoint is not connected 00:35:38.898 [2024-11-20 10:14:12.442244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1026d00 (9): Bad file descriptor 00:35:38.898 [2024-11-20 10:14:12.443245] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:38.898 [2024-11-20 10:14:12.443256] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:38.898 [2024-11-20 10:14:12.443263] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:38.898 [2024-11-20 10:14:12.443272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:38.898 request: 00:35:38.898 { 00:35:38.898 "name": "nvme0", 00:35:38.898 "trtype": "tcp", 00:35:38.898 "traddr": "127.0.0.1", 00:35:38.898 "adrfam": "ipv4", 00:35:38.898 "trsvcid": "4420", 00:35:38.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.898 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.898 "prchk_reftag": false, 00:35:38.898 "prchk_guard": false, 00:35:38.898 "hdgst": false, 00:35:38.898 "ddgst": false, 00:35:38.898 "psk": "key1", 00:35:38.898 "allow_unrecognized_csi": false, 00:35:38.898 "method": "bdev_nvme_attach_controller", 00:35:38.898 "req_id": 1 00:35:38.898 } 00:35:38.898 Got JSON-RPC error response 00:35:38.898 response: 00:35:38.898 { 00:35:38.898 "code": -5, 00:35:38.898 "message": "Input/output error" 00:35:38.898 } 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:38.898 10:14:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:38.898 10:14:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.898 10:14:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.157 10:14:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:39.157 10:14:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:39.157 10:14:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:39.157 10:14:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.157 10:14:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.157 10:14:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:39.157 10:14:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.416 10:14:12 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:39.416 10:14:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.416 10:14:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.675 10:14:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:39.675 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:39.675 10:14:13 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:39.675 10:14:13 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:39.675 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.935 10:14:13 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:39.935 10:14:13 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3pbTbznw3u 00:35:39.935 10:14:13 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.935 10:14:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:39.935 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:40.194 [2024-11-20 10:14:13.548163] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3pbTbznw3u': 0100660 00:35:40.194 [2024-11-20 10:14:13.548188] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:40.194 request: 00:35:40.194 { 00:35:40.194 "name": "key0", 00:35:40.194 "path": "/tmp/tmp.3pbTbznw3u", 00:35:40.194 "method": "keyring_file_add_key", 00:35:40.194 "req_id": 1 00:35:40.194 } 00:35:40.194 Got JSON-RPC error response 00:35:40.194 response: 00:35:40.194 { 00:35:40.194 "code": -1, 00:35:40.194 "message": "Operation not permitted" 00:35:40.194 } 00:35:40.194 10:14:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.194 10:14:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.194 10:14:13 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.194 10:14:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.194 10:14:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3pbTbznw3u 00:35:40.194 10:14:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3pbTbznw3u 00:35:40.194 10:14:13 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3pbTbznw3u 00:35:40.194 10:14:13 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.194 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.453 10:14:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:40.453 10:14:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:40.453 10:14:13 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.453 10:14:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.453 10:14:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.712 [2024-11-20 10:14:14.129698] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3pbTbznw3u': No such file or directory 00:35:40.712 [2024-11-20 10:14:14.129721] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:40.712 [2024-11-20 10:14:14.129736] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:40.712 [2024-11-20 10:14:14.129743] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:40.712 [2024-11-20 10:14:14.129750] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:40.712 [2024-11-20 10:14:14.129756] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:40.712 request: 00:35:40.712 { 00:35:40.712 "name": "nvme0", 00:35:40.712 "trtype": "tcp", 00:35:40.712 "traddr": "127.0.0.1", 00:35:40.712 "adrfam": "ipv4", 00:35:40.712 "trsvcid": "4420", 00:35:40.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.712 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:40.712 "prchk_reftag": false, 00:35:40.712 "prchk_guard": false, 00:35:40.712 "hdgst": false, 00:35:40.712 "ddgst": false, 00:35:40.712 "psk": "key0", 00:35:40.712 "allow_unrecognized_csi": false, 00:35:40.712 "method": "bdev_nvme_attach_controller", 00:35:40.712 "req_id": 1 00:35:40.712 } 00:35:40.712 Got JSON-RPC error response 00:35:40.712 response: 00:35:40.712 { 00:35:40.712 "code": -19, 00:35:40.712 "message": "No such device" 00:35:40.712 } 00:35:40.712 10:14:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.712 10:14:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.712 10:14:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.712 10:14:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.712 10:14:14 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:40.712 10:14:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:40.971 10:14:14 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MVss6GDGHj 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:40.971 10:14:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:40.971 10:14:14 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:40.971 10:14:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:40.971 10:14:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:40.971 10:14:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:40.971 10:14:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MVss6GDGHj 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MVss6GDGHj 00:35:40.971 10:14:14 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.MVss6GDGHj 00:35:40.971 10:14:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MVss6GDGHj 00:35:40.971 10:14:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MVss6GDGHj 00:35:41.230 10:14:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.230 10:14:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.488 nvme0n1 00:35:41.488 10:14:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:41.488 10:14:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.488 10:14:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.488 10:14:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.488 10:14:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.488 10:14:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.747 10:14:15 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:41.747 10:14:15 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:41.747 10:14:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:41.747 10:14:15 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:41.747 10:14:15 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:41.747 10:14:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.747 10:14:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.747 10:14:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.006 10:14:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:42.006 10:14:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:42.006 10:14:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.006 10:14:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.006 10:14:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.006 10:14:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.006 10:14:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.266 10:14:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:42.266 10:14:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:42.266 10:14:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:42.526 10:14:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:42.526 10:14:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:42.526 10:14:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.526 10:14:16 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:42.526 10:14:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MVss6GDGHj 00:35:42.526 10:14:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MVss6GDGHj 00:35:42.785 10:14:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BMAqEvfKPB 00:35:42.785 10:14:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BMAqEvfKPB 00:35:43.044 10:14:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.044 10:14:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.303 nvme0n1 00:35:43.303 10:14:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:43.303 10:14:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:43.563 10:14:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:43.563 "subsystems": [ 00:35:43.563 { 00:35:43.563 "subsystem": 
"keyring", 00:35:43.563 "config": [ 00:35:43.563 { 00:35:43.563 "method": "keyring_file_add_key", 00:35:43.563 "params": { 00:35:43.563 "name": "key0", 00:35:43.563 "path": "/tmp/tmp.MVss6GDGHj" 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "keyring_file_add_key", 00:35:43.563 "params": { 00:35:43.563 "name": "key1", 00:35:43.563 "path": "/tmp/tmp.BMAqEvfKPB" 00:35:43.563 } 00:35:43.563 } 00:35:43.563 ] 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "subsystem": "iobuf", 00:35:43.563 "config": [ 00:35:43.563 { 00:35:43.563 "method": "iobuf_set_options", 00:35:43.563 "params": { 00:35:43.563 "small_pool_count": 8192, 00:35:43.563 "large_pool_count": 1024, 00:35:43.563 "small_bufsize": 8192, 00:35:43.563 "large_bufsize": 135168, 00:35:43.563 "enable_numa": false 00:35:43.563 } 00:35:43.563 } 00:35:43.563 ] 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "subsystem": "sock", 00:35:43.563 "config": [ 00:35:43.563 { 00:35:43.563 "method": "sock_set_default_impl", 00:35:43.563 "params": { 00:35:43.563 "impl_name": "posix" 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "sock_impl_set_options", 00:35:43.563 "params": { 00:35:43.563 "impl_name": "ssl", 00:35:43.563 "recv_buf_size": 4096, 00:35:43.563 "send_buf_size": 4096, 00:35:43.563 "enable_recv_pipe": true, 00:35:43.563 "enable_quickack": false, 00:35:43.563 "enable_placement_id": 0, 00:35:43.563 "enable_zerocopy_send_server": true, 00:35:43.563 "enable_zerocopy_send_client": false, 00:35:43.563 "zerocopy_threshold": 0, 00:35:43.563 "tls_version": 0, 00:35:43.563 "enable_ktls": false 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "sock_impl_set_options", 00:35:43.563 "params": { 00:35:43.563 "impl_name": "posix", 00:35:43.563 "recv_buf_size": 2097152, 00:35:43.563 "send_buf_size": 2097152, 00:35:43.563 "enable_recv_pipe": true, 00:35:43.563 "enable_quickack": false, 00:35:43.563 "enable_placement_id": 0, 00:35:43.563 "enable_zerocopy_send_server": true, 
00:35:43.563 "enable_zerocopy_send_client": false, 00:35:43.563 "zerocopy_threshold": 0, 00:35:43.563 "tls_version": 0, 00:35:43.563 "enable_ktls": false 00:35:43.563 } 00:35:43.563 } 00:35:43.563 ] 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "subsystem": "vmd", 00:35:43.563 "config": [] 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "subsystem": "accel", 00:35:43.563 "config": [ 00:35:43.563 { 00:35:43.563 "method": "accel_set_options", 00:35:43.563 "params": { 00:35:43.563 "small_cache_size": 128, 00:35:43.563 "large_cache_size": 16, 00:35:43.563 "task_count": 2048, 00:35:43.563 "sequence_count": 2048, 00:35:43.563 "buf_count": 2048 00:35:43.563 } 00:35:43.563 } 00:35:43.563 ] 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "subsystem": "bdev", 00:35:43.563 "config": [ 00:35:43.563 { 00:35:43.563 "method": "bdev_set_options", 00:35:43.563 "params": { 00:35:43.563 "bdev_io_pool_size": 65535, 00:35:43.563 "bdev_io_cache_size": 256, 00:35:43.563 "bdev_auto_examine": true, 00:35:43.563 "iobuf_small_cache_size": 128, 00:35:43.563 "iobuf_large_cache_size": 16 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "bdev_raid_set_options", 00:35:43.563 "params": { 00:35:43.563 "process_window_size_kb": 1024, 00:35:43.563 "process_max_bandwidth_mb_sec": 0 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "bdev_iscsi_set_options", 00:35:43.563 "params": { 00:35:43.563 "timeout_sec": 30 00:35:43.563 } 00:35:43.563 }, 00:35:43.563 { 00:35:43.563 "method": "bdev_nvme_set_options", 00:35:43.563 "params": { 00:35:43.563 "action_on_timeout": "none", 00:35:43.563 "timeout_us": 0, 00:35:43.563 "timeout_admin_us": 0, 00:35:43.563 "keep_alive_timeout_ms": 10000, 00:35:43.563 "arbitration_burst": 0, 00:35:43.563 "low_priority_weight": 0, 00:35:43.563 "medium_priority_weight": 0, 00:35:43.563 "high_priority_weight": 0, 00:35:43.563 "nvme_adminq_poll_period_us": 10000, 00:35:43.563 "nvme_ioq_poll_period_us": 0, 00:35:43.563 "io_queue_requests": 512, 
00:35:43.563 "delay_cmd_submit": true, 00:35:43.563 "transport_retry_count": 4, 00:35:43.563 "bdev_retry_count": 3, 00:35:43.563 "transport_ack_timeout": 0, 00:35:43.563 "ctrlr_loss_timeout_sec": 0, 00:35:43.563 "reconnect_delay_sec": 0, 00:35:43.563 "fast_io_fail_timeout_sec": 0, 00:35:43.563 "disable_auto_failback": false, 00:35:43.563 "generate_uuids": false, 00:35:43.563 "transport_tos": 0, 00:35:43.563 "nvme_error_stat": false, 00:35:43.563 "rdma_srq_size": 0, 00:35:43.563 "io_path_stat": false, 00:35:43.563 "allow_accel_sequence": false, 00:35:43.563 "rdma_max_cq_size": 0, 00:35:43.564 "rdma_cm_event_timeout_ms": 0, 00:35:43.564 "dhchap_digests": [ 00:35:43.564 "sha256", 00:35:43.564 "sha384", 00:35:43.564 "sha512" 00:35:43.564 ], 00:35:43.564 "dhchap_dhgroups": [ 00:35:43.564 "null", 00:35:43.564 "ffdhe2048", 00:35:43.564 "ffdhe3072", 00:35:43.564 "ffdhe4096", 00:35:43.564 "ffdhe6144", 00:35:43.564 "ffdhe8192" 00:35:43.564 ] 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_nvme_attach_controller", 00:35:43.564 "params": { 00:35:43.564 "name": "nvme0", 00:35:43.564 "trtype": "TCP", 00:35:43.564 "adrfam": "IPv4", 00:35:43.564 "traddr": "127.0.0.1", 00:35:43.564 "trsvcid": "4420", 00:35:43.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.564 "prchk_reftag": false, 00:35:43.564 "prchk_guard": false, 00:35:43.564 "ctrlr_loss_timeout_sec": 0, 00:35:43.564 "reconnect_delay_sec": 0, 00:35:43.564 "fast_io_fail_timeout_sec": 0, 00:35:43.564 "psk": "key0", 00:35:43.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.564 "hdgst": false, 00:35:43.564 "ddgst": false, 00:35:43.564 "multipath": "multipath" 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_nvme_set_hotplug", 00:35:43.564 "params": { 00:35:43.564 "period_us": 100000, 00:35:43.564 "enable": false 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_wait_for_examine" 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }, 00:35:43.564 { 
00:35:43.564 "subsystem": "nbd", 00:35:43.564 "config": [] 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }' 00:35:43.564 10:14:16 keyring_file -- keyring/file.sh@115 -- # killprocess 2936584 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2936584 ']' 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2936584 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936584 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936584' 00:35:43.564 killing process with pid 2936584 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@973 -- # kill 2936584 00:35:43.564 Received shutdown signal, test time was about 1.000000 seconds 00:35:43.564 00:35:43.564 Latency(us) 00:35:43.564 [2024-11-20T09:14:17.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.564 [2024-11-20T09:14:17.146Z] =================================================================================================================== 00:35:43.564 [2024-11-20T09:14:17.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:43.564 10:14:16 keyring_file -- common/autotest_common.sh@978 -- # wait 2936584 00:35:43.564 10:14:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=2938097 00:35:43.564 10:14:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2938097 /var/tmp/bperf.sock 00:35:43.564 10:14:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2938097 ']' 00:35:43.564 10:14:17 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:43.564 10:14:17 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:43.564 10:14:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.564 10:14:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.564 10:14:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:43.564 "subsystems": [ 00:35:43.564 { 00:35:43.564 "subsystem": "keyring", 00:35:43.564 "config": [ 00:35:43.564 { 00:35:43.564 "method": "keyring_file_add_key", 00:35:43.564 "params": { 00:35:43.564 "name": "key0", 00:35:43.564 "path": "/tmp/tmp.MVss6GDGHj" 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "keyring_file_add_key", 00:35:43.564 "params": { 00:35:43.564 "name": "key1", 00:35:43.564 "path": "/tmp/tmp.BMAqEvfKPB" 00:35:43.564 } 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "subsystem": "iobuf", 00:35:43.564 "config": [ 00:35:43.564 { 00:35:43.564 "method": "iobuf_set_options", 00:35:43.564 "params": { 00:35:43.564 "small_pool_count": 8192, 00:35:43.564 "large_pool_count": 1024, 00:35:43.564 "small_bufsize": 8192, 00:35:43.564 "large_bufsize": 135168, 00:35:43.564 "enable_numa": false 00:35:43.564 } 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "subsystem": "sock", 00:35:43.564 "config": [ 00:35:43.564 { 00:35:43.564 "method": "sock_set_default_impl", 00:35:43.564 "params": { 00:35:43.564 "impl_name": "posix" 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "sock_impl_set_options", 00:35:43.564 "params": { 00:35:43.564 "impl_name": "ssl", 00:35:43.564 "recv_buf_size": 4096, 00:35:43.564 
"send_buf_size": 4096, 00:35:43.564 "enable_recv_pipe": true, 00:35:43.564 "enable_quickack": false, 00:35:43.564 "enable_placement_id": 0, 00:35:43.564 "enable_zerocopy_send_server": true, 00:35:43.564 "enable_zerocopy_send_client": false, 00:35:43.564 "zerocopy_threshold": 0, 00:35:43.564 "tls_version": 0, 00:35:43.564 "enable_ktls": false 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "sock_impl_set_options", 00:35:43.564 "params": { 00:35:43.564 "impl_name": "posix", 00:35:43.564 "recv_buf_size": 2097152, 00:35:43.564 "send_buf_size": 2097152, 00:35:43.564 "enable_recv_pipe": true, 00:35:43.564 "enable_quickack": false, 00:35:43.564 "enable_placement_id": 0, 00:35:43.564 "enable_zerocopy_send_server": true, 00:35:43.564 "enable_zerocopy_send_client": false, 00:35:43.564 "zerocopy_threshold": 0, 00:35:43.564 "tls_version": 0, 00:35:43.564 "enable_ktls": false 00:35:43.564 } 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "subsystem": "vmd", 00:35:43.564 "config": [] 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "subsystem": "accel", 00:35:43.564 "config": [ 00:35:43.564 { 00:35:43.564 "method": "accel_set_options", 00:35:43.564 "params": { 00:35:43.564 "small_cache_size": 128, 00:35:43.564 "large_cache_size": 16, 00:35:43.564 "task_count": 2048, 00:35:43.564 "sequence_count": 2048, 00:35:43.564 "buf_count": 2048 00:35:43.564 } 00:35:43.564 } 00:35:43.564 ] 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "subsystem": "bdev", 00:35:43.564 "config": [ 00:35:43.564 { 00:35:43.564 "method": "bdev_set_options", 00:35:43.564 "params": { 00:35:43.564 "bdev_io_pool_size": 65535, 00:35:43.564 "bdev_io_cache_size": 256, 00:35:43.564 "bdev_auto_examine": true, 00:35:43.564 "iobuf_small_cache_size": 128, 00:35:43.564 "iobuf_large_cache_size": 16 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_raid_set_options", 00:35:43.564 "params": { 00:35:43.564 "process_window_size_kb": 1024, 00:35:43.564 
"process_max_bandwidth_mb_sec": 0 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_iscsi_set_options", 00:35:43.564 "params": { 00:35:43.564 "timeout_sec": 30 00:35:43.564 } 00:35:43.564 }, 00:35:43.564 { 00:35:43.564 "method": "bdev_nvme_set_options", 00:35:43.564 "params": { 00:35:43.564 "action_on_timeout": "none", 00:35:43.564 "timeout_us": 0, 00:35:43.564 "timeout_admin_us": 0, 00:35:43.564 "keep_alive_timeout_ms": 10000, 00:35:43.564 "arbitration_burst": 0, 00:35:43.564 "low_priority_weight": 0, 00:35:43.564 "medium_priority_weight": 0, 00:35:43.564 "high_priority_weight": 0, 00:35:43.564 "nvme_adminq_poll_period_us": 10000, 00:35:43.564 "nvme_ioq_poll_period_us": 0, 00:35:43.565 "io_queue_requests": 512, 00:35:43.565 "delay_cmd_submit": true, 00:35:43.565 "transport_retry_count": 4, 00:35:43.565 "bdev_retry_count": 3, 00:35:43.565 "transport_ack_timeout": 0, 00:35:43.565 "ctrlr_loss_timeout_sec": 0, 00:35:43.565 "reconnect_delay_sec": 0, 00:35:43.565 "fast_io_fail_timeout_sec": 0, 00:35:43.565 "disable_auto_failback": false, 00:35:43.565 "generate_uuids": false, 00:35:43.565 "transport_tos": 0, 00:35:43.565 "nvme_error_stat": false, 00:35:43.565 "rdma_srq_size": 0, 00:35:43.565 "io_path_stat": false, 00:35:43.565 "allow_accel_sequence": false, 00:35:43.565 "rdma_max_cq_size": 0, 00:35:43.565 "rdma_cm_event_timeout_ms": 0, 00:35:43.565 "dhchap_digests": [ 00:35:43.565 "sha256", 00:35:43.565 "sha384", 00:35:43.565 "sha512" 00:35:43.565 ], 00:35:43.565 "dhchap_dhgroups": [ 00:35:43.565 "null", 00:35:43.565 "ffdhe2048", 00:35:43.565 "ffdhe3072", 00:35:43.565 "ffdhe4096", 00:35:43.565 "ffdhe6144", 00:35:43.565 "ffdhe8192" 00:35:43.565 ] 00:35:43.565 } 00:35:43.565 }, 00:35:43.565 { 00:35:43.565 "method": "bdev_nvme_attach_controller", 00:35:43.565 "params": { 00:35:43.565 "name": "nvme0", 00:35:43.565 "trtype": "TCP", 00:35:43.565 "adrfam": "IPv4", 00:35:43.565 "traddr": "127.0.0.1", 00:35:43.565 "trsvcid": "4420", 00:35:43.565 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:43.565 "prchk_reftag": false, 00:35:43.565 "prchk_guard": false, 00:35:43.565 "ctrlr_loss_timeout_sec": 0, 00:35:43.565 "reconnect_delay_sec": 0, 00:35:43.565 "fast_io_fail_timeout_sec": 0, 00:35:43.565 "psk": "key0", 00:35:43.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.565 "hdgst": false, 00:35:43.565 "ddgst": false, 00:35:43.565 "multipath": "multipath" 00:35:43.565 } 00:35:43.565 }, 00:35:43.565 { 00:35:43.565 "method": "bdev_nvme_set_hotplug", 00:35:43.565 "params": { 00:35:43.565 "period_us": 100000, 00:35:43.565 "enable": false 00:35:43.565 } 00:35:43.565 }, 00:35:43.565 { 00:35:43.565 "method": "bdev_wait_for_examine" 00:35:43.565 } 00:35:43.565 ] 00:35:43.565 }, 00:35:43.565 { 00:35:43.565 "subsystem": "nbd", 00:35:43.565 "config": [] 00:35:43.565 } 00:35:43.565 ] 00:35:43.565 }' 00:35:43.565 10:14:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.565 10:14:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.824 [2024-11-20 10:14:17.167673] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:35:43.824 [2024-11-20 10:14:17.167719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938097 ] 00:35:43.824 [2024-11-20 10:14:17.241448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.824 [2024-11-20 10:14:17.282989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.083 [2024-11-20 10:14:17.442500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:44.651 10:14:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.651 10:14:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:44.651 10:14:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:44.651 10:14:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:44.651 10:14:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.651 10:14:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:44.651 10:14:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:44.651 10:14:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.651 10:14:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.651 10:14:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.651 10:14:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.651 10:14:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.908 10:14:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:44.908 10:14:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:44.908 10:14:18 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.908 10:14:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.908 10:14:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.908 10:14:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.908 10:14:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.167 10:14:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:45.167 10:14:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:45.167 10:14:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:45.167 10:14:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:45.426 10:14:18 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:45.426 10:14:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:45.426 10:14:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.MVss6GDGHj /tmp/tmp.BMAqEvfKPB 00:35:45.426 10:14:18 keyring_file -- keyring/file.sh@20 -- # killprocess 2938097 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2938097 ']' 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2938097 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938097 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2938097' 00:35:45.426 killing process with pid 2938097 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@973 -- # kill 2938097 00:35:45.426 Received shutdown signal, test time was about 1.000000 seconds 00:35:45.426 00:35:45.426 Latency(us) 00:35:45.426 [2024-11-20T09:14:19.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.426 [2024-11-20T09:14:19.008Z] =================================================================================================================== 00:35:45.426 [2024-11-20T09:14:19.008Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@978 -- # wait 2938097 00:35:45.426 10:14:18 keyring_file -- keyring/file.sh@21 -- # killprocess 2936576 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2936576 ']' 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2936576 00:35:45.426 10:14:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:45.426 10:14:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936576 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936576' 00:35:45.684 killing process with pid 2936576 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@973 -- # kill 2936576 00:35:45.684 10:14:19 keyring_file -- common/autotest_common.sh@978 -- # wait 2936576 00:35:45.945 00:35:45.945 real 0m11.636s 00:35:45.945 user 0m28.904s 00:35:45.945 sys 0m2.702s 00:35:45.945 10:14:19 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:45.945 10:14:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.945 ************************************ 00:35:45.945 END TEST keyring_file 00:35:45.945 ************************************ 00:35:45.945 10:14:19 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:45.945 10:14:19 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:45.945 10:14:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:45.945 10:14:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.945 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:35:45.945 ************************************ 00:35:45.945 START TEST keyring_linux 00:35:45.945 ************************************ 00:35:45.945 10:14:19 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:45.945 Joined session keyring: 792902961 00:35:45.945 * Looking for test storage... 
00:35:45.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:45.945 10:14:19 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:45.945 10:14:19 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:45.945 10:14:19 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.205 --rc genhtml_branch_coverage=1 00:35:46.205 --rc genhtml_function_coverage=1 00:35:46.205 --rc genhtml_legend=1 00:35:46.205 --rc geninfo_all_blocks=1 00:35:46.205 --rc geninfo_unexecuted_blocks=1 00:35:46.205 00:35:46.205 ' 00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.205 --rc genhtml_branch_coverage=1 00:35:46.205 --rc genhtml_function_coverage=1 00:35:46.205 --rc genhtml_legend=1 00:35:46.205 --rc geninfo_all_blocks=1 00:35:46.205 --rc geninfo_unexecuted_blocks=1 00:35:46.205 00:35:46.205 ' 
00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.205 --rc genhtml_branch_coverage=1 00:35:46.205 --rc genhtml_function_coverage=1 00:35:46.205 --rc genhtml_legend=1 00:35:46.205 --rc geninfo_all_blocks=1 00:35:46.205 --rc geninfo_unexecuted_blocks=1 00:35:46.205 00:35:46.205 ' 00:35:46.205 10:14:19 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:46.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.205 --rc genhtml_branch_coverage=1 00:35:46.205 --rc genhtml_function_coverage=1 00:35:46.205 --rc genhtml_legend=1 00:35:46.205 --rc geninfo_all_blocks=1 00:35:46.205 --rc geninfo_unexecuted_blocks=1 00:35:46.205 00:35:46.205 ' 00:35:46.205 10:14:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:46.205 10:14:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.205 10:14:19 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.205 10:14:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.205 10:14:19 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.205 10:14:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.205 10:14:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:46.205 10:14:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.205 10:14:19 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:46.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:46.206 /tmp/:spdk-test:key0 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.206 10:14:19 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:46.206 10:14:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:46.206 /tmp/:spdk-test:key1 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2938652 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2938652 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2938652 ']' 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.206 10:14:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.206 10:14:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:46.206 [2024-11-20 10:14:19.768672] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:35:46.206 [2024-11-20 10:14:19.768724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938652 ] 00:35:46.464 [2024-11-20 10:14:19.842219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.464 [2024-11-20 10:14:19.884156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:46.723 10:14:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.723 [2024-11-20 10:14:20.095129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.723 null0 00:35:46.723 [2024-11-20 10:14:20.127188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:46.723 [2024-11-20 10:14:20.127551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.723 10:14:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:46.723 653986961 00:35:46.723 10:14:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:46.723 15477133 00:35:46.723 10:14:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2938660 00:35:46.723 10:14:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2938660 /var/tmp/bperf.sock 00:35:46.723 10:14:20 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2938660 ']' 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.723 10:14:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.723 [2024-11-20 10:14:20.202416] Starting SPDK v25.01-pre git sha1 c02c5e04b / DPDK 24.03.0 initialization... 
00:35:46.723 [2024-11-20 10:14:20.202460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938660 ] 00:35:46.723 [2024-11-20 10:14:20.277510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.982 [2024-11-20 10:14:20.319588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.982 10:14:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.982 10:14:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:46.982 10:14:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:46.982 10:14:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:46.982 10:14:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:46.982 10:14:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.241 10:14:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.241 10:14:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.499 [2024-11-20 10:14:20.948753] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:47.499 nvme0n1 00:35:47.499 10:14:21 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:47.499 10:14:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:47.499 10:14:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:47.499 10:14:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:47.499 10:14:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:47.499 10:14:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.759 10:14:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:47.759 10:14:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:47.759 10:14:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:47.759 10:14:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:47.759 10:14:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.759 10:14:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:47.759 10:14:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@25 -- # sn=653986961 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 653986961 == \6\5\3\9\8\6\9\6\1 ]] 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 653986961 00:35:48.018 10:14:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:48.018 10:14:21 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.018 Running I/O for 1 seconds... 00:35:49.396 21920.00 IOPS, 85.62 MiB/s 00:35:49.396 Latency(us) 00:35:49.396 [2024-11-20T09:14:22.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:49.396 nvme0n1 : 1.01 21920.44 85.63 0.00 0.00 5820.40 1919.27 7084.13 00:35:49.396 [2024-11-20T09:14:22.978Z] =================================================================================================================== 00:35:49.396 [2024-11-20T09:14:22.978Z] Total : 21920.44 85.63 0.00 0.00 5820.40 1919.27 7084.13 00:35:49.396 { 00:35:49.396 "results": [ 00:35:49.396 { 00:35:49.396 "job": "nvme0n1", 00:35:49.396 "core_mask": "0x2", 00:35:49.396 "workload": "randread", 00:35:49.396 "status": "finished", 00:35:49.396 "queue_depth": 128, 00:35:49.396 "io_size": 4096, 00:35:49.396 "runtime": 1.005819, 00:35:49.396 "iops": 21920.444930946822, 00:35:49.396 "mibps": 85.62673801151102, 00:35:49.396 "io_failed": 0, 00:35:49.396 "io_timeout": 0, 00:35:49.396 "avg_latency_us": 5820.399651669086, 00:35:49.396 "min_latency_us": 1919.2685714285715, 00:35:49.396 "max_latency_us": 7084.129523809524 00:35:49.396 } 00:35:49.396 ], 00:35:49.396 "core_count": 1 00:35:49.396 } 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.396 10:14:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:49.396 10:14:22 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:49.396 10:14:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:49.396 10:14:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.396 10:14:22 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.396 10:14:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:49.656 [2024-11-20 10:14:23.125812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:49.656 [2024-11-20 10:14:23.126750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2560a70 (107): Transport endpoint is not connected 00:35:49.656 [2024-11-20 10:14:23.127745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2560a70 (9): Bad file descriptor 00:35:49.656 [2024-11-20 10:14:23.128747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:49.656 [2024-11-20 10:14:23.128756] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:49.656 [2024-11-20 10:14:23.128763] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:49.656 [2024-11-20 10:14:23.128772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:49.656 request: 00:35:49.656 { 00:35:49.656 "name": "nvme0", 00:35:49.656 "trtype": "tcp", 00:35:49.656 "traddr": "127.0.0.1", 00:35:49.656 "adrfam": "ipv4", 00:35:49.656 "trsvcid": "4420", 00:35:49.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.656 "prchk_reftag": false, 00:35:49.656 "prchk_guard": false, 00:35:49.656 "hdgst": false, 00:35:49.656 "ddgst": false, 00:35:49.656 "psk": ":spdk-test:key1", 00:35:49.656 "allow_unrecognized_csi": false, 00:35:49.656 "method": "bdev_nvme_attach_controller", 00:35:49.656 "req_id": 1 00:35:49.656 } 00:35:49.656 Got JSON-RPC error response 00:35:49.656 response: 00:35:49.656 { 00:35:49.656 "code": -5, 00:35:49.656 "message": "Input/output error" 00:35:49.656 } 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@33 -- # sn=653986961 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 653986961 00:35:49.656 1 links removed 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:49.656 
10:14:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@33 -- # sn=15477133 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 15477133 00:35:49.656 1 links removed 00:35:49.656 10:14:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2938660 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2938660 ']' 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2938660 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938660 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938660' 00:35:49.656 killing process with pid 2938660 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 2938660 00:35:49.656 Received shutdown signal, test time was about 1.000000 seconds 00:35:49.656 00:35:49.656 Latency(us) 00:35:49.656 [2024-11-20T09:14:23.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.656 [2024-11-20T09:14:23.238Z] =================================================================================================================== 00:35:49.656 [2024-11-20T09:14:23.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.656 10:14:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 2938660 
00:35:49.915 10:14:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2938652 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2938652 ']' 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2938652 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938652 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938652' 00:35:49.915 killing process with pid 2938652 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 2938652 00:35:49.915 10:14:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 2938652 00:35:50.174 00:35:50.174 real 0m4.317s 00:35:50.174 user 0m8.065s 00:35:50.174 sys 0m1.487s 00:35:50.174 10:14:23 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.174 10:14:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:50.174 ************************************ 00:35:50.174 END TEST keyring_linux 00:35:50.174 ************************************ 00:35:50.433 10:14:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@346 -- # 
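Annotator's note: the `killprocess` calls above probe the target PID with `kill -0` before sending a real signal. A minimal sketch of that liveness-check pattern (the helper name `is_alive` is hypothetical, not from the autotest scripts):

```shell
#!/bin/sh
# kill -0 sends no signal; it only checks whether the PID exists and
# is signalable, returning 0 if so. This is the probe killprocess uses.
is_alive() { kill -0 "$1" 2>/dev/null; }

# The current shell's own PID is always alive, so the probe succeeds.
if is_alive "$$"; then
    echo "process $$ is alive"
fi
```

In the log, `kill -0 2938660` succeeding is what lets the script proceed to `ps --no-headers -o comm=` to confirm it is killing the right process name before issuing the real `kill`.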
'[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:50.433 10:14:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:50.433 10:14:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:50.433 10:14:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:50.433 10:14:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:50.433 10:14:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:50.433 10:14:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:50.433 10:14:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.433 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:35:50.433 10:14:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:50.433 10:14:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:50.433 10:14:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:50.433 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:35:55.699 INFO: APP EXITING 00:35:55.699 INFO: killing all VMs 00:35:55.699 INFO: killing vhost app 00:35:55.699 INFO: EXIT DONE 00:35:58.233 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:58.233 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:58.233 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:58.233 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:01.520 Cleaning 00:36:01.520 Removing: /var/run/dpdk/spdk0/config 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:01.520 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:01.520 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:01.521 Removing: /var/run/dpdk/spdk1/config 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:01.521 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:01.521 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:01.521 Removing: /var/run/dpdk/spdk2/config 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:01.521 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:01.521 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:01.521 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:01.521 Removing: /var/run/dpdk/spdk3/config 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:01.521 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:01.521 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:01.521 Removing: /var/run/dpdk/spdk4/config 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:01.521 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:01.521 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:01.521 Removing: /dev/shm/bdev_svc_trace.1 00:36:01.521 Removing: /dev/shm/nvmf_trace.0 00:36:01.521 Removing: /dev/shm/spdk_tgt_trace.pid2457345 00:36:01.521 Removing: /var/run/dpdk/spdk0 00:36:01.521 Removing: /var/run/dpdk/spdk1 00:36:01.521 Removing: /var/run/dpdk/spdk2 00:36:01.521 Removing: /var/run/dpdk/spdk3 00:36:01.521 Removing: /var/run/dpdk/spdk4 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2454971 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2456048 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2457345 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2457993 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2458934 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2459186 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2460155 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2460163 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2460515 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2462251 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2463526 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2463827 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2464118 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2464433 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2464723 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2464975 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2465221 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2465514 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2466252 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2469251 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2469510 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2469772 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2469786 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2470270 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2470283 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2470771 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2470886 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2471245 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2471282 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2471538 00:36:01.521 Removing: 
/var/run/dpdk/spdk_pid2471545 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2472107 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2472355 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2472663 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2476383 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2480774 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2490915 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2491610 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2496601 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2496876 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2501147 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2507032 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2509640 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2520076 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2529005 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2530815 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2531761 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2549157 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2553231 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2599668 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2604996 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2610842 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2617342 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2617344 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2618136 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2618958 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2619875 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2620432 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2620561 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2620788 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2620806 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2620843 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2621723 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2622634 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2623553 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2624019 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2624062 00:36:01.521 Removing: /var/run/dpdk/spdk_pid2624418 
00:36:01.521 Removing: /var/run/dpdk/spdk_pid2625492 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2626482 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2634625 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2663540 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2668213 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2669812 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2671777 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2671995 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2672303 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2672642 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2673149 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2674812 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2675766 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2676129 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2678368 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2678809 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2679361 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2683645 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2689261 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2689262 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2689263 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2693039 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2701424 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2705446 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2711435 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2712745 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2714290 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2715870 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2720842 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2725240 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2729202 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2736695 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2736803 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2741466 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2741734 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2741882 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2742220 00:36:01.781 Removing: 
/var/run/dpdk/spdk_pid2742244 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2746933 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2747504 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2752055 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2754603 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2759997 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2765410 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2774644 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2781649 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2781653 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2800489 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2801127 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2801604 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2802082 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2802817 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2803454 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2803983 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2804462 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2808712 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2808953 00:36:01.781 Removing: /var/run/dpdk/spdk_pid2815519 00:36:01.782 Removing: /var/run/dpdk/spdk_pid2815581 00:36:01.782 Removing: /var/run/dpdk/spdk_pid2821041 00:36:01.782 Removing: /var/run/dpdk/spdk_pid2825280 00:36:01.782 Removing: /var/run/dpdk/spdk_pid2835023 00:36:01.782 Removing: /var/run/dpdk/spdk_pid2835711 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2839750 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2840161 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2844433 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2850101 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2852700 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2863162 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2872061 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2873662 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2874583 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2890777 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2894744 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2897442 
00:36:02.041 Removing: /var/run/dpdk/spdk_pid2905406 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2905411 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2911184 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2913144 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2915115 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2916166 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2918131 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2919411 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2928168 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2928648 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2929315 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2931591 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2932068 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2932621 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2936576 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2936584 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2938097 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2938652 00:36:02.041 Removing: /var/run/dpdk/spdk_pid2938660 00:36:02.041 Clean 00:36:02.041 10:14:35 -- common/autotest_common.sh@1453 -- # return 0 00:36:02.041 10:14:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:02.041 10:14:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.041 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:36:02.041 10:14:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:02.041 10:14:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.041 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:36:02.301 10:14:35 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:02.301 10:14:35 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:02.301 10:14:35 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:02.301 10:14:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:02.301 10:14:35 
-- spdk/autotest.sh@398 -- # hostname 00:36:02.301 10:14:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:02.301 geninfo: WARNING: invalid characters removed from testname! 00:36:24.365 10:14:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:26.271 10:14:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:27.650 10:15:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:29.557 10:15:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:31.463 10:15:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:33.369 10:15:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:35.274 10:15:08 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:35.274 10:15:08 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:35.274 10:15:08 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:35.274 10:15:08 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:35.274 10:15:08 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:35.274 10:15:08 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:35.274 + [[ -n 
2377950 ]] 00:36:35.274 + sudo kill 2377950 00:36:35.284 [Pipeline] } 00:36:35.301 [Pipeline] // stage 00:36:35.308 [Pipeline] } 00:36:35.323 [Pipeline] // timeout 00:36:35.329 [Pipeline] } 00:36:35.343 [Pipeline] // catchError 00:36:35.350 [Pipeline] } 00:36:35.365 [Pipeline] // wrap 00:36:35.372 [Pipeline] } 00:36:35.386 [Pipeline] // catchError 00:36:35.396 [Pipeline] stage 00:36:35.399 [Pipeline] { (Epilogue) 00:36:35.414 [Pipeline] catchError 00:36:35.416 [Pipeline] { 00:36:35.430 [Pipeline] echo 00:36:35.432 Cleanup processes 00:36:35.438 [Pipeline] sh 00:36:35.724 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:35.724 2949907 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:35.740 [Pipeline] sh 00:36:36.047 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:36.047 ++ grep -v 'sudo pgrep' 00:36:36.047 ++ awk '{print $1}' 00:36:36.047 + sudo kill -9 00:36:36.047 + true 00:36:36.060 [Pipeline] sh 00:36:36.344 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:48.569 [Pipeline] sh 00:36:48.857 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:48.858 Artifacts sizes are good 00:36:48.872 [Pipeline] archiveArtifacts 00:36:48.879 Archiving artifacts 00:36:49.000 [Pipeline] sh 00:36:49.285 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:49.300 [Pipeline] cleanWs 00:36:49.310 [WS-CLEANUP] Deleting project workspace... 00:36:49.310 [WS-CLEANUP] Deferred wipeout is used... 00:36:49.317 [WS-CLEANUP] done 00:36:49.319 [Pipeline] } 00:36:49.337 [Pipeline] // catchError 00:36:49.350 [Pipeline] sh 00:36:49.660 + logger -p user.info -t JENKINS-CI 00:36:49.700 [Pipeline] } 00:36:49.713 [Pipeline] // stage 00:36:49.719 [Pipeline] } 00:36:49.733 [Pipeline] // node 00:36:49.739 [Pipeline] End of Pipeline 00:36:49.778 Finished: SUCCESS
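Annotator's note: the Epilogue cleanup above runs `pgrep -af <workspace>`, strips its own `sudo pgrep` invocation with `grep -v`, and extracts PIDs with `awk` before `kill -9` (the trailing `+ true` keeps the stage green when the PID list is empty). The same pipeline on canned sample input (assumed data, not real `pgrep` output):

```shell
#!/bin/sh
# Two fake process-list lines: a real target and the pgrep invocation
# itself, which would otherwise be killed by its own cleanup.
ps_output='1234 spdk_tgt
5678 sudo pgrep -af spdk'

# Drop the self-match, then keep only the first column (the PID).
pids=$(printf '%s\n' "$ps_output" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"   # prints 1234
```

With an empty `$pids`, `kill -9` exits nonzero, which is exactly why the script follows it with `|| true` semantics (`+ true` in the trace).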